Class: AWS.DMS
- Inherits:
-
AWS.Service
- Object
- AWS.Service
- AWS.DMS
- Identifier:
- dms
- API Version:
- 2016-01-01
- Defined in:
- (unknown)
Overview
Constructs a service interface object. Each API operation is exposed as a function on service.
Service Description
Database Migration Service (DMS) can migrate your data to and from the most widely used commercial and open-source databases such as Oracle, PostgreSQL, Microsoft SQL Server, Amazon Redshift, MariaDB, Amazon Aurora, MySQL, and SAP Adaptive Server Enterprise (ASE). The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to MySQL or SQL Server to PostgreSQL.
For more information about DMS, see What Is Database Migration Service? in the Database Migration Service User Guide.
Sending a Request Using DMS
var dms = new AWS.DMS();
dms.addTagsToResource(params, function (err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Locking the API Version
In order to ensure that the DMS object uses this specific API, you can
construct the object by passing the apiVersion
option to the constructor:
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
You can also set the API version globally in AWS.config.apiVersions
using
the dms service identifier:
AWS.config.apiVersions = {
dms: '2016-01-01',
// other service API versions
};
var dms = new AWS.DMS();
Version:
-
2016-01-01
Waiter Resource States
This service supports a list of resource states that can be polled using the waitFor() method. The resource states are:
testConnectionSucceeds, endpointDeleted, replicationInstanceAvailable, replicationInstanceDeleted, replicationTaskReady, replicationTaskStopped, replicationTaskRunning, replicationTaskDeleted
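For example, you can poll until an endpoint has been deleted. A minimal sketch (the filter name and value are illustrative assumptions; waiter parameters are passed through to the underlying describeEndpoints call):
var params = { Filters: [{ Name: 'endpoint-arn', Values: ['STRING_VALUE'] }] };
dms.waitFor('endpointDeleted', params, function (err, data) {
  if (err) console.log(err, err.stack); // polling failed or timed out
  else console.log(data);               // the endpoint reached the deleted state
});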
Constructor Summary collapse
-
new AWS.DMS(options = {}) ⇒ Object
constructor
Constructs a service object.
Property Summary collapse
-
endpoint ⇒ AWS.Endpoint
readwrite
An Endpoint object representing the endpoint URL for service requests.
Properties inherited from AWS.Service
Method Summary collapse
-
addTagsToResource(params = {}, callback) ⇒ AWS.Request
Adds metadata tags to a DMS resource, including replication instance, endpoint, security group, and migration task.
-
applyPendingMaintenanceAction(params = {}, callback) ⇒ AWS.Request
Applies a pending maintenance action to a resource (for example, to a replication instance).
-
cancelReplicationTaskAssessmentRun(params = {}, callback) ⇒ AWS.Request
Cancels a single premigration assessment run.
This operation prevents any individual assessments from running if they haven't started running.
-
createEndpoint(params = {}, callback) ⇒ AWS.Request
Creates an endpoint using the provided settings.
Note: For a MySQL source or target endpoint, don't explicitly specify the database using the DatabaseName request parameter on the CreateEndpoint API call.
- createEventSubscription(params = {}, callback) ⇒ AWS.Request
Creates a DMS event notification subscription.
- createReplicationInstance(params = {}, callback) ⇒ AWS.Request
Creates the replication instance using the specified parameters.
DMS requires that your account have certain roles with appropriate permissions before you can create a replication instance.
- createReplicationSubnetGroup(params = {}, callback) ⇒ AWS.Request
Creates a replication subnet group given a list of the subnet IDs in a VPC.
The VPC needs to have at least one subnet in at least two availability zones in the Amazon Web Services Region, otherwise the service will throw a ReplicationSubnetGroupDoesNotCoverEnoughAZs exception.
- createReplicationTask(params = {}, callback) ⇒ AWS.Request
Creates a replication task using the specified parameters.
- deleteCertificate(params = {}, callback) ⇒ AWS.Request
Deletes the specified certificate.
- deleteConnection(params = {}, callback) ⇒ AWS.Request
Deletes the connection between a replication instance and an endpoint.
- deleteEndpoint(params = {}, callback) ⇒ AWS.Request
Deletes the specified endpoint.
Note: All tasks associated with the endpoint must be deleted before you can delete the endpoint.
- deleteEventSubscription(params = {}, callback) ⇒ AWS.Request
Deletes a DMS event subscription.
- deleteReplicationInstance(params = {}, callback) ⇒ AWS.Request
Deletes the specified replication instance.
Note: You must delete any migration tasks that are associated with the replication instance before you can delete it.
- deleteReplicationSubnetGroup(params = {}, callback) ⇒ AWS.Request
Deletes a subnet group.
- deleteReplicationTask(params = {}, callback) ⇒ AWS.Request
Deletes the specified replication task.
- deleteReplicationTaskAssessmentRun(params = {}, callback) ⇒ AWS.Request
Deletes the record of a single premigration assessment run.
This operation removes all metadata that DMS maintains about this assessment run.
- describeAccountAttributes(params = {}, callback) ⇒ AWS.Request
Lists all of the DMS attributes for a customer account.
- describeApplicableIndividualAssessments(params = {}, callback) ⇒ AWS.Request
Provides a list of individual assessments that you can specify for a new premigration assessment run, given one or more parameters.
If you specify an existing migration task, this operation provides the default individual assessments you can specify for that task.
- describeCertificates(params = {}, callback) ⇒ AWS.Request
Provides a description of the certificate.
- describeConnections(params = {}, callback) ⇒ AWS.Request
Describes the status of the connections that have been made between the replication instance and an endpoint.
- describeEndpoints(params = {}, callback) ⇒ AWS.Request
Returns information about the endpoints for your account in the current region.
- describeEndpointSettings(params = {}, callback) ⇒ AWS.Request
Returns information about the possible endpoint settings available when you create an endpoint for a specific database engine.
- describeEndpointTypes(params = {}, callback) ⇒ AWS.Request
Returns information about the type of endpoints available.
- describeEventCategories(params = {}, callback) ⇒ AWS.Request
Lists categories for all event source types, or, if specified, for a specified source type.
- describeEvents(params = {}, callback) ⇒ AWS.Request
Lists events for a given source identifier and source type.
- describeEventSubscriptions(params = {}, callback) ⇒ AWS.Request
Lists all the event subscriptions for a customer account.
- describeOrderableReplicationInstances(params = {}, callback) ⇒ AWS.Request
Returns information about the replication instance types that can be created in the specified region.
- describePendingMaintenanceActions(params = {}, callback) ⇒ AWS.Request
For internal use only.
- describeRefreshSchemasStatus(params = {}, callback) ⇒ AWS.Request
Returns the status of the RefreshSchemas operation.
- describeReplicationInstances(params = {}, callback) ⇒ AWS.Request
Returns information about replication instances for your account in the current region.
- describeReplicationInstanceTaskLogs(params = {}, callback) ⇒ AWS.Request
Returns information about the task logs for the specified task.
- describeReplicationSubnetGroups(params = {}, callback) ⇒ AWS.Request
Returns information about the replication subnet groups.
- describeReplicationTaskAssessmentResults(params = {}, callback) ⇒ AWS.Request
Returns the task assessment results from the Amazon S3 bucket that DMS creates in your Amazon Web Services account.
- describeReplicationTaskAssessmentRuns(params = {}, callback) ⇒ AWS.Request
Returns a paginated list of premigration assessment runs based on filter settings.
These filter settings can specify a combination of premigration assessment runs, migration tasks, replication instances, and assessment run status values.
Note: This operation doesn't return information about individual assessments.
- describeReplicationTaskIndividualAssessments(params = {}, callback) ⇒ AWS.Request
Returns a paginated list of individual assessments based on filter settings.
These filter settings can specify a combination of premigration assessment runs, migration tasks, and assessment status values.
- describeReplicationTasks(params = {}, callback) ⇒ AWS.Request
Returns information about replication tasks for your account in the current region.
- describeSchemas(params = {}, callback) ⇒ AWS.Request
Returns information about the schema for the specified endpoint.
- describeTableStatistics(params = {}, callback) ⇒ AWS.Request
Returns table statistics on the database migration task, including table name, rows inserted, rows updated, and rows deleted.
Note that the "last updated" column the DMS console only indicates the time that DMS last updated the table statistics record for a table.
- importCertificate(params = {}, callback) ⇒ AWS.Request
Uploads the specified certificate.
- listTagsForResource(params = {}, callback) ⇒ AWS.Request
Lists all metadata tags attached to a DMS resource, including replication instance, endpoint, security group, and migration task.
- modifyEndpoint(params = {}, callback) ⇒ AWS.Request
Modifies the specified endpoint.
Note: For a MySQL source or target endpoint, don't explicitly specify the database using the DatabaseName request parameter on the ModifyEndpoint API call.
- modifyEventSubscription(params = {}, callback) ⇒ AWS.Request
Modifies an existing DMS event notification subscription.
- modifyReplicationInstance(params = {}, callback) ⇒ AWS.Request
Modifies the replication instance to apply new settings.
- modifyReplicationSubnetGroup(params = {}, callback) ⇒ AWS.Request
Modifies the settings for the specified replication subnet group.
- modifyReplicationTask(params = {}, callback) ⇒ AWS.Request
Modifies the specified replication task.
You can't modify the task endpoints.
- moveReplicationTask(params = {}, callback) ⇒ AWS.Request
Moves a replication task from its current replication instance to a different target replication instance using the specified parameters.
- rebootReplicationInstance(params = {}, callback) ⇒ AWS.Request
Reboots a replication instance.
- refreshSchemas(params = {}, callback) ⇒ AWS.Request
Populates the schema for the specified endpoint.
- reloadTables(params = {}, callback) ⇒ AWS.Request
Reloads the target database table with the source data.
- removeTagsFromResource(params = {}, callback) ⇒ AWS.Request
Removes metadata tags from a DMS resource, including replication instance, endpoint, security group, and migration task.
- startReplicationTask(params = {}, callback) ⇒ AWS.Request
Starts the replication task.
For more information about DMS tasks, see Working with Migration Tasks in the Database Migration Service User Guide.
- startReplicationTaskAssessment(params = {}, callback) ⇒ AWS.Request
Starts the replication task assessment for unsupported data types in the source database.
- startReplicationTaskAssessmentRun(params = {}, callback) ⇒ AWS.Request
Starts a new premigration assessment run for one or more individual assessments of a migration task.
The assessments that you can specify depend on the source and target database engine and the migration type defined for the given task.
- stopReplicationTask(params = {}, callback) ⇒ AWS.Request
Stops the replication task.
- testConnection(params = {}, callback) ⇒ AWS.Request
Tests the connection between the replication instance and the endpoint.
- waitFor(state, params = {}, callback) ⇒ AWS.Request
Waits for a given DMS resource.
Methods inherited from AWS.Service
makeRequest, makeUnauthenticatedRequest, setupRequestListeners, defineService
Constructor Details
new AWS.DMS(options = {}) ⇒ Object
Constructs a service object. This object has one method for each API operation.
Examples:
Constructing a DMS object
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
Options Hash (options):
-
params
(map)
—
An optional map of parameters to bind to every request sent by this service object. For more information on bound parameters, see "Working with Services" in the Getting Started Guide.
-
endpoint
(String|AWS.Endpoint)
—
The endpoint URI to send requests to. The default endpoint is built from the configured
region
. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com'
or an Endpoint object. -
accessKeyId
(String)
—
your AWS access key ID.
-
secretAccessKey
(String)
—
your AWS secret access key.
-
sessionToken
(AWS.Credentials)
—
the optional AWS session token to sign requests with.
-
credentials
(AWS.Credentials)
—
the AWS credentials to sign requests with. You can either specify this object, or specify the accessKeyId and secretAccessKey options directly.
-
credentialProvider
(AWS.CredentialProviderChain)
—
the provider chain used to resolve credentials if no static
credentials
property is set. -
region
(String)
—
the region to send service requests to. See AWS.DMS.region for more information.
-
maxRetries
(Integer)
—
the maximum amount of retries to attempt with a request. See AWS.DMS.maxRetries for more information.
-
maxRedirects
(Integer)
—
the maximum amount of redirects to follow with a request. See AWS.DMS.maxRedirects for more information.
-
sslEnabled
(Boolean)
—
whether to enable SSL for requests.
-
paramValidation
(Boolean|map)
—
whether input parameters should be validated against the operation description before sending the request. Defaults to true. Pass a map to enable any of the following specific validation features:
- min [Boolean] — Validates that a value meets the min
constraint. This is enabled by default when paramValidation is set
to
true
. - max [Boolean] — Validates that a value meets the max constraint.
- pattern [Boolean] — Validates that a string value matches a regular expression.
- enum [Boolean] — Validates that a string value matches one of the allowable enum values.
-
computeChecksums
(Boolean)
—
whether to compute checksums for payload bodies when the service accepts it (currently supported in S3 only)
-
convertResponseTypes
(Boolean)
—
whether types are converted when parsing response data. Currently only supported for JSON based services. Turning this off may improve performance on large response payloads. Defaults to
true
. -
correctClockSkew
(Boolean)
—
whether to apply a clock skew correction and retry requests that fail because of a skewed client clock. Defaults to
false
. -
s3ForcePathStyle
(Boolean)
—
whether to force path style URLs for S3 objects.
-
s3BucketEndpoint
(Boolean)
—
whether the provided endpoint addresses an individual bucket (false if it addresses the root API endpoint). Note that setting this configuration option requires an
endpoint
to be provided explicitly to the service constructor. -
s3DisableBodySigning
(Boolean)
—
whether S3 body signing should be disabled when using signature version
v4
. Body signing can only be disabled when using https. Defaults to true. -
s3UsEast1RegionalEndpoint
('legacy'|'regional')
—
when region is set to 'us-east-1', whether to send s3 request to global endpoints or 'us-east-1' regional endpoints. This config is only applicable to S3 client. Defaults to
legacy
-
s3UseArnRegion
(Boolean)
—
whether to override the request region with the region inferred from requested resource's ARN. Only available for S3 buckets. Defaults to
true
-
retryDelayOptions
(map)
—
A set of options to configure the retry delay on retryable errors. Currently supported options are:
- base [Integer] — The base number of milliseconds to use in the exponential backoff for operation retries. Defaults to 100 ms for all services except DynamoDB, where it defaults to 50ms.
- customBackoff [function] — A custom function that accepts a
retry count and error and returns the amount of time to delay in
milliseconds. If the result is a non-zero negative value, no further
retry attempts will be made. The
base
option will be ignored if this option is supplied. The function is only called for retryable errors.
-
httpOptions
(map)
—
A set of options to pass to the low-level HTTP request. Currently supported options are:
- proxy [String] — the URL to proxy requests through
- agent [http.Agent, https.Agent] — the Agent object to perform
HTTP requests with. Used for connection pooling. Defaults to the global
agent (
http.globalAgent
) for non-SSL connections. Note that for SSL connections, a special Agent object is used in order to enable peer certificate verification. This feature is only available in the Node.js environment. - connectTimeout [Integer] — Sets the socket to timeout after
failing to establish a connection with the server after
connectTimeout
milliseconds. This timeout has no effect once a socket connection has been established. - timeout [Integer] — Sets the socket to timeout after timeout milliseconds of inactivity on the socket. Defaults to two minutes (120000).
- xhrAsync [Boolean] — Whether the SDK will send asynchronous HTTP requests. Used in the browser environment only. Set to false to send requests synchronously. Defaults to true (async on).
- xhrWithCredentials [Boolean] — Sets the "withCredentials" property of an XMLHttpRequest object. Used in the browser environment only. Defaults to false.
-
apiVersion
(String, Date)
—
a String in YYYY-MM-DD format (or a date) that represents the latest possible API version that can be used in all services (unless overridden by
apiVersions
). Specify 'latest' to use the latest possible version. -
apiVersions
(map<String, String|Date>)
—
a map of service identifiers (the lowercase service class name) with the API version to use when instantiating a service. Specify 'latest' for each individual service that can use the latest available version.
-
logger
(#write, #log)
—
an object that responds to .write() (like a stream) or .log() (like the console object) in order to log information about requests
-
systemClockOffset
(Number)
—
an offset value in milliseconds to apply to all signing times. Use this to compensate for clock skew when your system may be out of sync with the service time. Note that this configuration option can only be applied to the global
AWS.config
object and cannot be overridden in service-specific configuration. Defaults to 0 milliseconds. -
signatureVersion
(String)
—
the signature version to sign requests with (overriding the API configuration). Possible values are: 'v2', 'v3', 'v4'.
-
signatureCache
(Boolean)
—
whether the signature to sign requests with (overriding the API configuration) is cached. Only applies to the signature version 'v4'. Defaults to
true
. -
dynamoDbCrc32
(Boolean)
—
whether to validate the CRC32 checksum of HTTP response bodies returned by DynamoDB. Default:
true
. -
useAccelerateEndpoint
(Boolean)
—
Whether to use the S3 Transfer Acceleration endpoint with the S3 service. Default:
false
. -
clientSideMonitoring
(Boolean)
—
whether to collect and publish this client's performance metrics of all its API requests.
-
endpointDiscoveryEnabled
(Boolean|undefined)
—
whether to call operations with endpoints given by service dynamically. Setting this to true enables endpoint discovery for all applicable operations; setting it to false disables endpoint discovery even for operations that require it; leaving it undefined enables endpoint discovery only for operations that require it.
-
endpointCacheSize
(Number)
—
the size of the global cache storing endpoints from endpoint discovery operations. Once endpoint cache is created, updating this setting cannot change existing cache size. Defaults to 1000
-
hostPrefixEnabled
(Boolean)
—
whether to marshal request parameters to the prefix of hostname. Defaults to
true
. -
stsRegionalEndpoints
('legacy'|'regional')
—
whether to send sts request to global endpoints or regional endpoints. Defaults to 'legacy'.
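As an illustration of how several of these options can be combined when constructing the client (all values below are arbitrary examples, not recommendations):
var dms = new AWS.DMS({
  apiVersion: '2016-01-01',
  region: 'us-east-1',
  maxRetries: 5,
  retryDelayOptions: {
    // customBackoff receives the retry count and returns the delay in milliseconds;
    // returning a negative value stops further retries.
    customBackoff: function (retryCount) { return 100 * Math.pow(2, retryCount); }
  },
  httpOptions: { connectTimeout: 5000, timeout: 120000 },
  paramValidation: { min: true, max: true, pattern: true, enum: true },
  logger: console
});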
Property Details
Method Details
addTagsToResource(params = {}, callback) ⇒ AWS.Request
Adds metadata tags to a DMS resource, including replication instance, endpoint, security group, and migration task. These tags can also be used with cost allocation reporting to track cost associated with DMS resources, or used in a Condition statement in an IAM policy for DMS. For more information, see the Tag data type description.
Service Reference:
Examples:
Add tags to resource
/* Adds metadata tags to an AWS DMS resource, including replication instance, endpoint,
   security group, and migration task. These tags can also be used with cost allocation
   reporting to track cost associated with AWS DMS resources, or used in a Condition
   statement in an IAM policy for AWS DMS. */
var params = {
  ResourceArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E", // Required. Use the ARN of the resource you want to tag.
  Tags: [
    {
      Key: "Acount",
      Value: "1633456"
    }
  ] // Required. Use the Key/Value pair format.
};
dms.addTagsToResource(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
  /*
  data = {
  }
  */
});
Calling the addTagsToResource operation
var params = {
  ResourceArn: 'STRING_VALUE', /* required */
  Tags: [ /* required */
    {
      Key: 'STRING_VALUE',
      ResourceArn: 'STRING_VALUE',
      Value: 'STRING_VALUE'
    },
    /* more items */
  ]
};
dms.addTagsToResource(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ResourceArn
— (String
)Identifies the DMS resource to which tags should be added. The value for this parameter is an Amazon Resource Name (ARN).
For DMS, you can tag a replication instance, an endpoint, or a replication task.
Tags
— (Array<map>
)One or more tags to be assigned to the resource.
Key
— (String
)A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expression: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
Value
— (String
)A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expression: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
ResourceArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the resource for which the tag is created.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
Returns:
- (AWS.Request) — a handle to the operation request for subsequent event callback registration.
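As noted above, when no callback is supplied the operation returns an AWS.Request that you send yourself; the request can also be converted to a promise. A minimal sketch (parameter values are placeholders):
var request = dms.addTagsToResource({
  ResourceArn: 'STRING_VALUE',
  Tags: [{ Key: 'STRING_VALUE', Value: 'STRING_VALUE' }]
});
// Register listeners, then send the request explicitly...
request.on('success', function (response) { console.log(response.data); })
       .on('error', function (err) { console.log(err, err.stack); })
       .send();
// ...or use the promise form instead of a callback.
dms.addTagsToResource({
  ResourceArn: 'STRING_VALUE',
  Tags: [{ Key: 'STRING_VALUE', Value: 'STRING_VALUE' }]
}).promise().then(console.log).catch(console.error);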
applyPendingMaintenanceAction(params = {}, callback) ⇒ AWS.Request
Applies a pending maintenance action to a resource (for example, to a replication instance).
Service Reference:
Examples:
Calling the applyPendingMaintenanceAction operation
var params = {
  ApplyAction: 'STRING_VALUE', /* required */
  OptInType: 'STRING_VALUE', /* required */
  ReplicationInstanceArn: 'STRING_VALUE' /* required */
};
dms.applyPendingMaintenanceAction(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the DMS resource that the pending maintenance action applies to.
ApplyAction
— (String
)The pending maintenance action to apply to this resource.
Valid values: os-upgrade, system-update, db-upgrade
OptInType
— (String
)A value that specifies the type of opt-in request, or undoes an opt-in request. You can't undo an opt-in request of type
immediate
.
Valid values:
- immediate - Apply the maintenance action immediately.
- next-maintenance - Apply the maintenance action during the next maintenance window for the resource.
- undo-opt-in - Cancel any existing next-maintenance opt-in requests.
-
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. The data object has the following properties:
ResourcePendingMaintenanceActions
— (map
)The DMS resource that the pending maintenance action will be applied to.
ResourceIdentifier
— (String
)The Amazon Resource Name (ARN) of the DMS resource that the pending maintenance action applies to. For information about creating an ARN, see Constructing an Amazon Resource Name (ARN) for DMS in the DMS documentation.
PendingMaintenanceActionDetails
— (Array<map>
)Detailed information about the pending maintenance action.
Action
— (String
)The type of pending maintenance action that is available for the resource.
AutoAppliedAfterDate
— (Date
)The date of the maintenance window when the action is to be applied. The maintenance action is applied to the resource during its first maintenance window after this date. If this date is specified, any
next-maintenance
opt-in requests are ignored.
ForcedApplyDate
— (Date
)The date when the maintenance action will be automatically applied. The maintenance action is applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any
immediate
opt-in requests are ignored.
OptInStatus
— (String
)The type of opt-in request that has been received for the resource.
CurrentApplyDate
— (Date
)The effective date when the pending maintenance action will be applied to the resource. This date takes into account opt-in requests received from the
ApplyPendingMaintenanceAction
API operation, and also the AutoAppliedAfterDate and ForcedApplyDate parameter values. This value is blank if an opt-in request has not been received and nothing has been specified for AutoAppliedAfterDate or ForcedApplyDate.
Description
— (String
)A description providing more detail about the maintenance action.
Returns:
- (AWS.Request) — a handle to the operation request for subsequent event callback registration.
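A short sketch of reading the response shape described above (the ARN is a placeholder; the action and opt-in values are examples from the valid values listed earlier):
dms.applyPendingMaintenanceAction({
  ReplicationInstanceArn: 'STRING_VALUE',
  ApplyAction: 'os-upgrade',
  OptInType: 'next-maintenance'
}, function (err, data) {
  if (err) return console.log(err, err.stack);
  var resource = data.ResourcePendingMaintenanceActions;
  console.log(resource.ResourceIdentifier);
  (resource.PendingMaintenanceActionDetails || []).forEach(function (action) {
    // Each entry describes one pending action and when it will take effect.
    console.log(action.Action, action.OptInStatus, action.CurrentApplyDate);
  });
});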
cancelReplicationTaskAssessmentRun(params = {}, callback) ⇒ AWS.Request
Cancels a single premigration assessment run.
This operation prevents any individual assessments from running if they haven't started running. It also attempts to cancel any individual assessments that are currently running.
Service Reference:
Examples:
Calling the cancelReplicationTaskAssessmentRun operation
var params = {
  ReplicationTaskAssessmentRunArn: 'STRING_VALUE' /* required */
};
dms.cancelReplicationTaskAssessmentRun(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskAssessmentRunArn
— (String
)Amazon Resource Name (ARN) of the premigration assessment run to be canceled.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. The data object has the following properties:
ReplicationTaskAssessmentRun
— (map
)The ReplicationTaskAssessmentRun object for the canceled assessment run.
ReplicationTaskAssessmentRunArn
— (String
)Amazon Resource Name (ARN) of this assessment run.
ReplicationTaskArn
— (String
)ARN of the migration task associated with this premigration assessment run.
Status
— (String
)Assessment run status.
This status can have one of the following values:
- "cancelling" – The assessment run was canceled by the CancelReplicationTaskAssessmentRun operation.
- "deleting" – The assessment run was deleted by the DeleteReplicationTaskAssessmentRun operation.
- "failed" – At least one individual assessment completed with a failed status.
- "error-provisioning" – An internal error occurred while resources were provisioned (during provisioning status).
- "error-executing" – An internal error occurred while individual assessments ran (during running status).
- "invalid state" – The assessment run is in an unknown state.
- "passed" – All individual assessments have completed, and none has a failed status.
- "provisioning" – Resources required to run individual assessments are being provisioned.
- "running" – Individual assessments are being run.
- "starting" – The assessment run is starting, but resources are not yet being provisioned for individual assessments.
-
ReplicationTaskAssessmentRunCreationDate
— (Date
)Date on which the assessment run was created using the
StartReplicationTaskAssessmentRun
operation.
AssessmentProgress
— (map
)Indication of the completion progress for the individual assessments specified to run.
IndividualAssessmentCount
— (Integer
)The number of individual assessments that are specified to run.
IndividualAssessmentCompletedCount
— (Integer
)The number of individual assessments that have completed, successfully or not.
LastFailureMessage
— (String
)Last message generated by an individual assessment failure.
ServiceAccessRoleArn
— (String
)ARN of the service role used to start the assessment run using the
StartReplicationTaskAssessmentRun
operation. The role must allow the iam:PassRole action.
ResultLocationBucket
— (String
)Amazon S3 bucket where DMS stores the results of this assessment run.
ResultLocationFolder
— (String
)Folder in an Amazon S3 bucket where DMS stores the results of this assessment run.
ResultEncryptionMode
— (String
)Encryption mode used to encrypt the assessment run results.
ResultKmsKeyArn
— (String
)ARN of the KMS encryption key used to encrypt the assessment run results.
AssessmentRunName
— (String
)Unique name of the assessment run.
Returns:
- (AWS.Request) — a handle to the operation request for subsequent event callback registration.
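For instance, you might inspect the returned status after requesting the cancellation (a sketch using the fields described above; the ARN is a placeholder):
dms.cancelReplicationTaskAssessmentRun({
  ReplicationTaskAssessmentRunArn: 'STRING_VALUE'
}, function (err, data) {
  if (err) return console.log(err, err.stack);
  var run = data.ReplicationTaskAssessmentRun;
  // Status should move toward "cancelling" once the request is accepted.
  console.log(run.ReplicationTaskAssessmentRunArn, run.Status, run.LastFailureMessage);
});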
createEndpoint(params = {}, callback) ⇒ AWS.Request
Creates an endpoint using the provided settings.
Note: For a MySQL source or target endpoint, don't explicitly specify the database using the DatabaseName request parameter on the CreateEndpoint API call. Specifying DatabaseName when you create a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.
Service Reference:
Examples:
Create endpoint
/* Creates an endpoint using the provided settings. */
var params = {
  CertificateArn: "",
  DatabaseName: "testdb",
  EndpointIdentifier: "test-endpoint-1",
  EndpointType: "source",
  EngineName: "mysql",
  ExtraConnectionAttributes: "",
  KmsKeyId: "arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd",
  Password: "pasword",
  Port: 3306,
  ServerName: "mydb.cx1llnox7iyx.us-west-2.rds.amazonaws.com",
  SslMode: "require",
  Tags: [
    {
      Key: "Acount",
      Value: "143327655"
    }
  ],
  Username: "username"
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
  /*
  data = {
    Endpoint: {
      EndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM",
      EndpointIdentifier: "test-endpoint-1",
      EndpointType: "source",
      EngineName: "mysql",
      KmsKeyId: "arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd",
      Port: 3306,
      ServerName: "mydb.cx1llnox7iyx.us-west-2.rds.amazonaws.com",
      Status: "active",
      Username: "username"
    }
  }
  */
});
Calling the createEndpoint operation
var params = { EndpointIdentifier: 'STRING_VALUE', /* required */ EndpointType: source | target, /* required */ EngineName: 'STRING_VALUE', /* required */ CertificateArn: 'STRING_VALUE', DatabaseName: 'STRING_VALUE', DmsTransferSettings: { BucketName: 'STRING_VALUE', ServiceAccessRoleArn: 'STRING_VALUE' }, DocDbSettings: { DatabaseName: 'STRING_VALUE', DocsToInvestigate: 'NUMBER_VALUE', ExtractDocId: true || false, KmsKeyId: 'STRING_VALUE', NestingLevel: none | one, Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', Username: 'STRING_VALUE' }, DynamoDbSettings: { ServiceAccessRoleArn: 'STRING_VALUE' /* required */ }, ElasticsearchSettings: { EndpointUri: 'STRING_VALUE', /* required */ ServiceAccessRoleArn: 'STRING_VALUE', /* required */ ErrorRetryDuration: 'NUMBER_VALUE', FullLoadErrorPercentage: 'NUMBER_VALUE' }, ExternalTableDefinition: 'STRING_VALUE', ExtraConnectionAttributes: 'STRING_VALUE', IBMDb2Settings: { CurrentLsn: 'STRING_VALUE', DatabaseName: 'STRING_VALUE', MaxKBytesPerRead: 'NUMBER_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', SetDataCaptureChanges: true || false, Username: 'STRING_VALUE' }, KafkaSettings: { Broker: 'STRING_VALUE', IncludeControlDetails: true || false, IncludeNullAndEmpty: true || false, IncludePartitionValue: true || false, IncludeTableAlterOperations: true || false, IncludeTransactionDetails: true || false, MessageFormat: json | json-unformatted, MessageMaxBytes: 'NUMBER_VALUE', NoHexPrefix: true || false, PartitionIncludeSchemaTable: true || false, SaslPassword: 'STRING_VALUE', SaslUsername: 'STRING_VALUE', SecurityProtocol: plaintext | ssl-authentication | ssl-encryption | sasl-ssl, SslCaCertificateArn: 'STRING_VALUE', SslClientCertificateArn: 'STRING_VALUE', SslClientKeyArn: 'STRING_VALUE', SslClientKeyPassword: 'STRING_VALUE', Topic: 'STRING_VALUE' }, KinesisSettings: { IncludeControlDetails: true || false, IncludeNullAndEmpty: true || false, IncludePartitionValue: true || false, IncludeTableAlterOperations: true || false, IncludeTransactionDetails: true || false, MessageFormat: json | json-unformatted, NoHexPrefix: true || false, PartitionIncludeSchemaTable: true || false, ServiceAccessRoleArn: 'STRING_VALUE', StreamArn: 'STRING_VALUE' }, KmsKeyId: 'STRING_VALUE', MicrosoftSQLServerSettings: { BcpPacketSize: 'NUMBER_VALUE', ControlTablesFileGroup: 'STRING_VALUE', DatabaseName: 'STRING_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', QuerySingleAlwaysOnNode: true || false, ReadBackupOnly: true || false, SafeguardPolicy: rely-on-sql-server-replication-agent | exclusive-automatic-truncation | shared-automatic-truncation, SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', UseBcpFullLoad: true || false, UseThirdPartyBackupDevice: true || false, Username: 'STRING_VALUE' }, MongoDbSettings: { AuthMechanism: default | mongodb_cr | scram_sha_1, AuthSource: 'STRING_VALUE', AuthType: no | password, DatabaseName: 'STRING_VALUE', DocsToInvestigate: 'STRING_VALUE', ExtractDocId: 'STRING_VALUE', KmsKeyId: 'STRING_VALUE', NestingLevel: none | one, Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', Username: 'STRING_VALUE' }, MySQLSettings: { AfterConnectScript: 
'STRING_VALUE', CleanSourceMetadataOnMismatch: true || false, DatabaseName: 'STRING_VALUE', EventsPollInterval: 'NUMBER_VALUE', MaxFileSize: 'NUMBER_VALUE', ParallelLoadThreads: 'NUMBER_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', ServerTimezone: 'STRING_VALUE', TargetDbType: specific-database | multiple-databases, Username: 'STRING_VALUE' }, NeptuneSettings: { S3BucketFolder: 'STRING_VALUE', /* required */ S3BucketName: 'STRING_VALUE', /* required */ ErrorRetryDuration: 'NUMBER_VALUE', IamAuthEnabled: true || false, MaxFileSize: 'NUMBER_VALUE', MaxRetryCount: 'NUMBER_VALUE', ServiceAccessRoleArn: 'STRING_VALUE' }, OracleSettings: { AccessAlternateDirectly: true || false, AddSupplementalLogging: true || false, AdditionalArchivedLogDestId: 'NUMBER_VALUE', AllowSelectNestedTables: true || false, ArchivedLogDestId: 'NUMBER_VALUE', ArchivedLogsOnly: true || false, AsmPassword: 'STRING_VALUE', AsmServer: 'STRING_VALUE', AsmUser: 'STRING_VALUE', CharLengthSemantics: default | char | byte, DatabaseName: 'STRING_VALUE', DirectPathNoLog: true || false, DirectPathParallelLoad: true || false, EnableHomogenousTablespace: true || false, ExtraArchivedLogDestIds: [ 'NUMBER_VALUE', /* more items */ ], FailTasksOnLobTruncation: true || false, NumberDatatypeScale: 'NUMBER_VALUE', OraclePathPrefix: 'STRING_VALUE', ParallelAsmReadThreads: 'NUMBER_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', ReadAheadBlocks: 'NUMBER_VALUE', ReadTableSpaceName: true || false, ReplacePathPrefix: true || false, RetryInterval: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerOracleAsmAccessRoleArn: 'STRING_VALUE', SecretsManagerOracleAsmSecretId: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', SecurityDbEncryption: 'STRING_VALUE', SecurityDbEncryptionName: 'STRING_VALUE', ServerName: 'STRING_VALUE', SpatialDataOptionToGeoJsonFunctionName: 'STRING_VALUE', StandbyDelayTime: 'NUMBER_VALUE', UseAlternateFolderForOnline: true || false, UseBFile: true || false, UseDirectPathFullLoad: true || false, UseLogminerReader: true || false, UsePathPrefix: 'STRING_VALUE', Username: 'STRING_VALUE' }, Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', PostgreSQLSettings: { AfterConnectScript: 'STRING_VALUE', CaptureDdls: true || false, DatabaseName: 'STRING_VALUE', DdlArtifactsSchema: 'STRING_VALUE', ExecuteTimeout: 'NUMBER_VALUE', FailTasksOnLobTruncation: true || false, HeartbeatEnable: true || false, HeartbeatFrequency: 'NUMBER_VALUE', HeartbeatSchema: 'STRING_VALUE', MaxFileSize: 'NUMBER_VALUE', Password: 'STRING_VALUE', PluginName: no-preference | test-decoding | pglogical, Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', SlotName: 'STRING_VALUE', Username: 'STRING_VALUE' }, RedisSettings: { Port: 'NUMBER_VALUE', /* required */ ServerName: 'STRING_VALUE', /* required */ AuthPassword: 'STRING_VALUE', AuthType: none | auth-role | auth-token, AuthUserName: 'STRING_VALUE', SslCaCertificateArn: 'STRING_VALUE', SslSecurityProtocol: plaintext | ssl-encryption }, RedshiftSettings: { AcceptAnyDate: true || false, AfterConnectScript: 'STRING_VALUE', BucketFolder: 'STRING_VALUE', BucketName: 'STRING_VALUE', CaseSensitiveNames: true || false, CompUpdate: true || false, ConnectionTimeout: 'NUMBER_VALUE', DatabaseName: 'STRING_VALUE', DateFormat: 'STRING_VALUE', EmptyAsNull: true || false, EncryptionMode: 
sse-s3 | sse-kms, ExplicitIds: true || false, FileTransferUploadStreams: 'NUMBER_VALUE', LoadTimeout: 'NUMBER_VALUE', MaxFileSize: 'NUMBER_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', RemoveQuotes: true || false, ReplaceChars: 'STRING_VALUE', ReplaceInvalidChars: 'STRING_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', ServerSideEncryptionKmsKeyId: 'STRING_VALUE', ServiceAccessRoleArn: 'STRING_VALUE', TimeFormat: 'STRING_VALUE', TrimBlanks: true || false, TruncateColumns: true || false, Username: 'STRING_VALUE', WriteBufferSize: 'NUMBER_VALUE' }, ResourceIdentifier: 'STRING_VALUE', S3Settings: { AddColumnName: true || false, BucketFolder: 'STRING_VALUE', BucketName: 'STRING_VALUE', CannedAclForObjects: none | private | public-read | public-read-write | authenticated-read | aws-exec-read | bucket-owner-read | bucket-owner-full-control, CdcInsertsAndUpdates: true || false, CdcInsertsOnly: true || false, CdcMaxBatchInterval: 'NUMBER_VALUE', CdcMinFileSize: 'NUMBER_VALUE', CdcPath: 'STRING_VALUE', CompressionType: none | gzip, CsvDelimiter: 'STRING_VALUE', CsvNoSupValue: 'STRING_VALUE', CsvNullValue: 'STRING_VALUE', CsvRowDelimiter: 'STRING_VALUE', DataFormat: csv | parquet, DataPageSize: 'NUMBER_VALUE', DatePartitionDelimiter: SLASH | UNDERSCORE | DASH | NONE, DatePartitionEnabled: true || false, DatePartitionSequence: YYYYMMDD | YYYYMMDDHH | YYYYMM | MMYYYYDD | DDMMYYYY, DictPageSizeLimit: 'NUMBER_VALUE', EnableStatistics: true || false, EncodingType: plain | plain-dictionary | rle-dictionary, EncryptionMode: sse-s3 | sse-kms, ExternalTableDefinition: 'STRING_VALUE', IgnoreHeaderRows: 'NUMBER_VALUE', IncludeOpForFullLoad: true || false, MaxFileSize: 'NUMBER_VALUE', ParquetTimestampInMillisecond: true || false, ParquetVersion: parquet-1-0 | parquet-2-0, PreserveTransactions: true || false, Rfc4180: true || false, RowGroupLength: 'NUMBER_VALUE', ServerSideEncryptionKmsKeyId: 'STRING_VALUE', ServiceAccessRoleArn: 'STRING_VALUE', TimestampColumnName: 'STRING_VALUE', UseCsvNoSupValue: true || false }, ServerName: 'STRING_VALUE', ServiceAccessRoleArn: 'STRING_VALUE', SslMode: none | require | verify-ca | verify-full, SybaseSettings: { DatabaseName: 'STRING_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', Username: 'STRING_VALUE' }, Tags: [ { Key: 'STRING_VALUE', ResourceArn: 'STRING_VALUE', Value: 'STRING_VALUE' }, /* more items */ ], Username: 'STRING_VALUE' }; dms.createEndpoint(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
EndpointIdentifier
— (String
)The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen, or contain two consecutive hyphens.
EndpointType
— (String
)The type of endpoint. Valid values are
source and target.
Possible values include:
"source"
"target"
EngineName
— (String
)The type of engine for the endpoint. Valid values, depending on the
EndpointType
value, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "docdb", "sqlserver", and "neptune".
Username
— (String
)The user name to be used to log in to the endpoint database.
Password
— (String
)The password to be used to log in to the endpoint database.
ServerName
— (String
)The name of the server where the endpoint database resides.
Port
— (Integer
)The port used by the endpoint database.
DatabaseName
— (String
)The name of the endpoint database. For a MySQL source or target endpoint, do not specify DatabaseName.
ExtraConnectionAttributes
— (String
)Additional attributes associated with the connection. Each attribute is specified as a name-value pair associated by an equal sign (=). Multiple attributes are separated by a semicolon (;) with no additional white space. For information on the attributes available for connecting your source or target endpoint, see Working with DMS Endpoints in the Database Migration Service User Guide.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then DMS uses your default encryption key.
KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
Tags
— (Array<map>
)One or more tags to be assigned to the endpoint.
Key
— (String
)A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expression: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
Value
— (String
)A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expression: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
ResourceArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the resource for which the tag is created.
CertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate.
SslMode
— (String
)The Secure Sockets Layer (SSL) mode to use for the SSL connection. The default is
none.
Possible values include:
"none"
"require"
"verify-ca"
"verify-full"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint. The role must allow the
iam:PassRole
action.
ExternalTableDefinition
— (String
)The external table definition.
DynamoDbSettings
— (map
)Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB in the Database Migration Service User Guide.
ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.
S3Settings
— (map
)Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for DMS in the Database Migration Service User Guide.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.
ExternalTableDefinition
— (String
)Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter
— (String
)The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (
\n
).CsvDelimiter
— (String
)The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder
— (String
)An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path
bucketFolder/schema_name/table_name/
. If this parameter isn't specified, then the path used isschema_name/table_name/
.BucketName
— (String
)The name of the S3 bucket.
CompressionType
— (String
)An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
Possible values include:"none"
"gzip"
EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use
SSE_S3
, you need an Identity and Access Management (IAM) role with permission to allow"arn:aws:s3:::dms-*"
to use the following actions:-
s3:CreateBucket
-
s3:ListBucket
-
s3:DeleteBucket
-
s3:GetBucketLocation
-
s3:GetObject
-
s3:PutObject
-
s3:DeleteObject
-
s3:GetObjectVersion
-
s3:GetBucketPolicy
-
s3:PutBucketPolicy
-
s3:DeleteBucketPolicy
"sse-s3"
"sse-kms"
-
ServerSideEncryptionKmsKeyId
— (String
)If you are using
SSE_KMS
for theEncryptionMode
, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.Here is a CLI example:
aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat
— (String
)The format of the data that you want to use for output. You can choose one of the following:
-
csv
: This is a row-based file format with comma-separated values (.csv). -
parquet
: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
"csv"
"parquet"
-
EncodingType
— (String
)The type of encoding you are using:
-
RLE_DICTIONARY
uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default. -
PLAIN
doesn't use encoding at all. Values are stored as they are. -
PLAIN_DICTIONARY
builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
"plain"
"plain-dictionary"
"rle-dictionary"
-
DictPageSizeLimit
— (Integer
)The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of
PLAIN
. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts toPLAIN
encoding. This size is used for .parquet file format only.RowGroupLength
— (Integer
)The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, the slower writes become. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum,
RowGroupLength
is set to the max row group length in bytes (64 * 1024 * 1024).DataPageSize
— (Integer
)The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion
— (String
)The version of the Apache Parquet format that you want to use:
Possible values include:parquet_1_0
(the default) orparquet_2_0
."parquet-1-0"
"parquet-2-0"
EnableStatistics
— (Boolean
)A value that enables statistics for Parquet pages and row groups. Choose
true
to enable statistics,false
to disable. Statistics includeNULL
,DISTINCT
,MAX
, andMIN
values. This parameter defaults totrue
. This value is used for .parquet file format only.IncludeOpForFullLoad
— (Boolean
)A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note: DMS supports theIncludeOpForFullLoad
parameter in versions 3.1.4 and later.For full load, records can only be inserted. By default (the
false
setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. IfIncludeOpForFullLoad
is set totrue
ory
, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.Note: This setting works together with theCdcInsertsOnly
and theCdcInsertsAndUpdates
parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..CdcInsertsOnly
— (Boolean
)A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the
false
setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.If
CdcInsertsOnly
is set totrue
ory
, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value ofIncludeOpForFullLoad
. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to I to indicate the INSERT operation at the source. IfIncludeOpForFullLoad
is set tofalse
, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the interaction described preceding between theCdcInsertsOnly
andIncludeOpForFullLoad
parameters in versions 3.1.4 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.TimestampColumnName
— (String
)A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note: DMS supports theTimestampColumnName
parameter in versions 3.1.4 and later.DMS includes an additional
STRING
column in the .csv or .parquet object files of your migrated data when you setTimestampColumnName
to a nonblank value.For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is
yyyy-MM-dd HH:mm:ss.SSSSSS
. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.When the
AddColumnName
parameter is set totrue
, DMS also includes a name for the timestamp column that you set withTimestampColumnName
.ParquetTimestampInMillisecond
— (Boolean
)A value that specifies the precision of any
TIMESTAMP
column values that are written to an Amazon S3 object file in .parquet format.Note: DMS supports theParquetTimestampInMillisecond
parameter in versions 3.1.4 and later.When
ParquetTimestampInMillisecond
is set totrue
ory
, DMS writes allTIMESTAMP
columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.Currently, Amazon Athena and Glue can handle only millisecond precision for
TIMESTAMP
values. Set this parameter totrue
for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.Note: DMS writes anyTIMESTAMP
column values written to an S3 file in .csv format with microsecond precision. SettingParquetTimestampInMillisecond
has no effect on the string format of the timestamp column value that is inserted by setting theTimestampColumnName
parameter.CdcInsertsAndUpdates
— (Boolean
)A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is
false
, but whenCdcInsertsAndUpdates
is set totrue
ory
, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the
IncludeOpForFullLoad
parameter. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to eitherI
orU
to indicate INSERT and UPDATE operations at the source. But ifIncludeOpForFullLoad
is set tofalse
, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the use of theCdcInsertsAndUpdates
parameter in versions 3.3.1 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.DatePartitionEnabled
— (Boolean
)When set to
true
, this parameter partitions S3 bucket folders based on transaction commit dates. The default value isfalse
. For more information about date-based folder partitioning, see Using date-based folder partitioning.DatePartitionSequence
— (String
)Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"YYYYMMDD"
"YYYYMMDDHH"
"YYYYMM"
"MMYYYYDD"
"DDMMYYYY"
DatePartitionDelimiter
— (String
)Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"SLASH"
"UNDERSCORE"
"DASH"
"NONE"
UseCsvNoSupValue
— (Boolean
)This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to
true
for columns not included in the supplemental log, DMS uses the value specified byCsvNoSupValue
. If not set or set tofalse
, DMS uses the null value for these columns.Note: This setting is supported in DMS versions 3.4.1 and later.CsvNoSupValue
— (String
)This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If
UseCsvNoSupValue
is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of theUseCsvNoSupValue
setting.Note: This setting is supported in DMS versions 3.4.1 and later.PreserveTransactions
— (Boolean
)If set to
true
, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified byCdcPath
. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.Note: This setting is supported in DMS versions 3.4.2 and later.CdcPath
— (String
)Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If
CdcPath
is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target if you setPreserveTransactions
totrue
, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified byBucketFolder
andBucketName
.For example, if you specify
CdcPath
asMyChangedData
, and you specifyBucketName
asMyTargetBucket
but do not specifyBucketFolder
, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData
.If you specify the same
CdcPath
, and you specifyBucketName
asMyTargetBucket
andBucketFolder
asMyTargetData
, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData
.For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
Note: This setting is supported in DMS versions 3.4.2 and later.CannedAclForObjects
— (String
)A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
Possible values include:"none"
"private"
"public-read"
"public-read-write"
"authenticated-read"
"aws-exec-read"
"bucket-owner-read"
"bucket-owner-full-control"
AddColumnName
— (Boolean
)An optional parameter that, when set to
true
ory
, you can use to add column name information to the .csv output file.The default value is
false
. Valid values aretrue
,false
,y
, andn
.CdcMaxBatchInterval
— (Integer
)Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When
CdcMaxBatchInterval
andCdcMinFileSize
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.The default value is 60 seconds.
CdcMinFileSize
— (Integer
)Minimum file size, defined in megabytes, to reach for a file output to Amazon S3.
When
CdcMinFileSize
andCdcMaxBatchInterval
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.The default value is 32 MB.
CsvNullValue
— (String
)An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of
NULL
.The default value is
NULL
. Valid values include any valid string.IgnoreHeaderRows
— (Integer
)When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
MaxFileSize
— (Integer
)A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
Rfc4180
— (Boolean
)For an S3 source, when this value is set to
true
ory
, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set tofalse
orn
, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to
true
ory
using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.The default value is
true
. Valid values includetrue
,false
,y
, andn
.
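For illustration only, the following is a minimal sketch of a createEndpoint call that uses a few of the S3Settings described above. The endpoint identifier, role ARN, bucket, and folder names are placeholder values, not values taken from this reference.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-s3-target', // placeholder identifier
  EndpointType: 'target',
  EngineName: 's3',
  S3Settings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-s3-role', // placeholder ARN
    BucketName: 'example-target-bucket', // placeholder bucket
    BucketFolder: 'migrated-data',       // placeholder folder
    TimestampColumnName: 'dms_ts',       // adds a STRING timestamp column to the output
    AddColumnName: true,                 // include column names in .csv output
    CdcMaxBatchInterval: 60,             // seconds
    CdcMinFileSize: 32                   // megabytes
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});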
DmsTransferSettings
— (map
)The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
-
ServiceAccessRoleArn
- The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow theiam:PassRole
action. -
BucketName
- The name of the S3 bucket to use.
Shorthand syntax for these settings is as follows:
ServiceAccessRoleArn=string,BucketName=string
JSON syntax for these settings is as follows:
{ "ServiceAccessRoleArn": "string", "BucketName": "string", }
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the
iam:PassRole
action.BucketName
— (String
)The name of the S3 bucket to use.
-
MongoDbSettings
— (map
)Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see Endpoint configuration settings when using MongoDB as a source for Database Migration Service in the Database Migration Service User Guide.
Username
— (String
)The user name you use to access the MongoDB source endpoint.
Password
— (String
)The password for the user account you use to access the MongoDB source endpoint.
ServerName
— (String
)The name of the server on the MongoDB source endpoint.
Port
— (Integer
)The port value for the MongoDB source endpoint.
DatabaseName
— (String
)The database name on the MongoDB source endpoint.
AuthType
— (String
)The authentication type you use to access the MongoDB source endpoint.
When set to "no", user name and password parameters are not used and can be empty.
Possible values include:
"no"
"password"
AuthMechanism
— (String
)The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
Possible values include:
"default"
"mongodb_cr"
"scram_sha_1"
NestingLevel
— (String
)Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
Possible values include:
"none"
"one"
ExtractDocId
— (String
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (String
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.AuthSource
— (String
)The MongoDB database name. This setting isn't used when
AuthType
is set to"no"
.The default is
"admin"
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MongoDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MongoDB endpoint connection details.
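As a hedged sketch, a MongoDB source endpoint using the MongoDbSettings above might be created as follows; all connection values shown are placeholders, not values from this reference.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-mongodb-source', // placeholder identifier
  EndpointType: 'source',
  EngineName: 'mongodb',
  MongoDbSettings: {
    ServerName: 'mongodb.example.com', // placeholder host
    Port: 27017,
    DatabaseName: 'exampledb',         // placeholder database
    AuthType: 'password',
    AuthMechanism: 'default',
    AuthSource: 'admin',
    Username: 'example-user',          // placeholder credentials
    Password: 'example-password',
    NestingLevel: 'none',              // document mode
    ExtractDocId: 'true'               // include the document ID column
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});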
KinesisSettings
— (map
)Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using object mapping to migrate data to a Kinesis data stream in the Database Migration Service User Guide.
StreamArn
— (String
)The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Kinesis data stream. The role must allow the
iam:PassRole
action.IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kinesis message output, unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is
false
.IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
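The following is a minimal, illustrative sketch of a Kinesis target endpoint built from the KinesisSettings above; the stream ARN and role ARN are placeholder values.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-kinesis-target', // placeholder identifier
  EndpointType: 'target',
  EngineName: 'kinesis',
  KinesisSettings: {
    StreamArn: 'arn:aws:kinesis:us-east-1:123456789012:stream/example-stream',            // placeholder ARN
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-kinesis-role',      // placeholder ARN
    MessageFormat: 'json',
    IncludePartitionValue: true,
    PartitionIncludeSchemaTable: true
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});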
KafkaSettings
— (map
)Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using object mapping to migrate data to a Kafka topic in the Database Migration Service User Guide.
Broker
— (String
)A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form
broker-hostname-or-ip:port
. For example,"ec2-12-345-678-901.compute-1.amazonaws.com:2345"
. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.Topic
— (String
)The topic to which you migrate the data. If you don't specify a topic, DMS specifies
"kafka-default-topic"
as the migration topic.MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kafka message output unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is
false
.MessageMaxBytes
— (Integer
)The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.SecurityProtocol
— (String
)Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
Possible values include:
"plaintext"
"ssl-authentication"
"ssl-encryption"
"sasl-ssl"
SslClientCertificateArn
— (String
)The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
SslClientKeyArn
— (String
)The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
SslClientKeyPassword
— (String
)The password for the client private key used to securely connect to a Kafka target endpoint.
SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
SaslUsername
— (String
)The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
SaslPassword
— (String
)The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
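As a sketch only, a Kafka target endpoint using SASL-SSL with the KafkaSettings above might look like the following; the broker address, topic, and credentials are placeholders.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-kafka-target', // placeholder identifier
  EndpointType: 'target',
  EngineName: 'kafka',
  KafkaSettings: {
    Broker: 'broker1.example.com:9092',  // placeholder broker location
    Topic: 'example-migration-topic',    // placeholder topic
    MessageFormat: 'json',
    SecurityProtocol: 'sasl-ssl',
    SaslUsername: 'example-user',        // placeholder credentials
    SaslPassword: 'example-password',
    MessageMaxBytes: 1000000
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});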
ElasticsearchSettings
— (map
)Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using Elasticsearch as a Target for DMS in the Database Migration Service User Guide.
ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.EndpointUri
— required — (String
)The endpoint for the Elasticsearch cluster. DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage
— (Integer
)The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration
— (Integer
)The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
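For illustration, a minimal sketch of an Elasticsearch target endpoint using the two required ElasticsearchSettings above; the role ARN and endpoint URI are placeholder values.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-es-target', // placeholder identifier
  EndpointType: 'target',
  EngineName: 'elasticsearch',
  ElasticsearchSettings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-es-role', // placeholder ARN
    EndpointUri: 'https://search-example-domain.us-east-1.es.amazonaws.com',    // placeholder URI
    FullLoadErrorPercentage: 10,
    ErrorRetryDuration: 300
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});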
NeptuneSettings
— (map
)Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see Specifying graph-mapping rules using Gremlin and R2RML for Amazon Neptune as a target in the Database Migration Service User Guide.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. The role must allow the
iam:PassRole
action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the Database Migration Service User Guide.S3BucketName
— required — (String
)The name of the Amazon S3 bucket where DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder
— required — (String
)A folder path where you want DMS to store migrated graph data in the S3 bucket specified by
S3BucketName
ErrorRetryDuration
— (Integer
)The number of milliseconds for DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize
— (Integer
)The maximum size in kilobytes of migrated graph data stored in a .csv file before DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount
— (Integer
)The number of times for DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled
— (Boolean
)If you want Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to
true
. Then attach the appropriate IAM policy document to your service role specified byServiceAccessRoleArn
. The default isfalse
.
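The sketch below shows, with placeholder role ARN, bucket, and folder values, how the NeptuneSettings above might be supplied for a Neptune target endpoint.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-neptune-target', // placeholder identifier
  EndpointType: 'target',
  EngineName: 'neptune',
  NeptuneSettings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-neptune-role', // placeholder ARN
    S3BucketName: 'example-neptune-staging-bucket', // placeholder bucket for interim .csv files
    S3BucketFolder: 'neptune-staging',              // placeholder folder
    MaxRetryCount: 5,
    IamAuthEnabled: false
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});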
RedshiftSettings
— (map
)Provides information that defines an Amazon Redshift endpoint.
AcceptAnyDate
— (Boolean
)A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose
true
orfalse
(the default).This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript
— (String
)Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder
— (String
)An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift
COPY
command to upload the .csv files to the target table. The files are deleted once theCOPY
operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName
— (String
)The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames
— (Boolean
)If Amazon Redshift is configured to support case sensitive schema names, set
CaseSensitiveNames
totrue
. The default isfalse
.CompUpdate
— (Boolean
)If you set
CompUpdate
totrue
Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other thanRAW
. If you setCompUpdate
tofalse
, automatic compression is disabled and existing column encodings aren't changed. The default istrue
.ConnectionTimeout
— (Integer
)A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName
— (String
)The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat
— (String
)The date format that you are using. Valid values are
auto
(case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Usingauto
recognizes most strings, even some that aren't supported when you use a date format string.If your date and time values use formats different from each other, set this to
auto
.EmptyAsNull
— (Boolean
)A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of
true
sets empty CHAR and VARCHAR fields to null. The default isfalse
.EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
Possible values include:
"sse-s3"
"sse-kms"
ExplicitIds
— (Boolean
)This setting is only valid for a full-load migration task. Set
ExplicitIds
totrue
to have tables withIDENTITY
columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default isfalse
.FileTransferUploadStreams
— (Integer
)The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams
accepts a value from 1 through 64. It defaults to 10.LoadTimeout
— (Integer
)The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize
— (Integer
)The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576KB (1 GB).
Password
— (String
)The password for the user named in the
username
property.Port
— (Integer
)The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes
— (Boolean
)A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose
true
to remove quotation marks. The default isfalse
.ReplaceInvalidChars
— (String
)A list of characters that you want to replace. Use with
ReplaceChars
.ReplaceChars
— (String
)A value that specifies to replace the invalid characters specified in
ReplaceInvalidChars
, substituting the specified characters instead. The default is"?"
.ServerName
— (String
)The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the
iam:PassRole
action.ServerSideEncryptionKmsKeyId
— (String
)The KMS key ID. If you are using
SSE_KMS
for theEncryptionMode
, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.TimeFormat
— (String
)The time format that you want to use. Valid values are
auto
(case-sensitive),'timeformat_string'
,'epochsecs'
, or'epochmillisecs'
. It defaults to 10. Usingauto
recognizes most strings, even some that aren't supported when you use a time format string.If your date and time values use formats different from each other, set this parameter to
auto
.TrimBlanks
— (Boolean
)A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose
true
to remove unneeded white space. The default isfalse
.TruncateColumns
— (Boolean
)A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose
true
to truncate data. The default isfalse
.Username
— (String
)An Amazon Redshift user name for a registered user.
WriteBufferSize
— (Integer
)The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Amazon Redshift endpoint connection details.
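As an illustrative sketch, a Redshift target endpoint using a subset of the RedshiftSettings above might be created as follows; the cluster address, credentials, bucket, and role ARN are placeholders.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-redshift-target', // placeholder identifier
  EndpointType: 'target',
  EngineName: 'redshift',
  RedshiftSettings: {
    ServerName: 'example-cluster.abc123.us-east-1.redshift.amazonaws.com', // placeholder host
    Port: 5439,
    DatabaseName: 'exampledw',       // placeholder database
    Username: 'example-user',        // placeholder credentials
    Password: 'example-password',
    BucketName: 'example-staging-bucket', // placeholder intermediate S3 bucket
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-redshift-role', // placeholder ARN
    EncryptionMode: 'sse-s3'
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});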
PostgreSQLSettings
— (map
)Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see Extra connection attributes when using PostgreSQL as a source for DMS and Extra connection attributes when using PostgreSQL as a target for DMS in the Database Migration Service User Guide.
AfterConnectScript
— (String
)For use with change data capture (CDC) only, this attribute has DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example:
afterConnectScript=SET session_replication_role='replica'
CaptureDdls
— (Boolean
)To capture DDL events, DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to
N
, you don't have to create tables or triggers on the source database.MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example:
maxFileSize=512
DatabaseName
— (String
)Database name for the endpoint.
DdlArtifactsSchema
— (String
)The schema in which the operational DDL database artifacts are created.
Example:
ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout
— (Integer
)Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example:
executeTimeout=100;
FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this value causes a task to fail if the actual size of a LOB column is greater than the specifiedLobMaxSize
.If a task is set to Limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
HeartbeatEnable
— (Boolean
)The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps
restart_lsn
moving and prevents storage full scenarios.HeartbeatSchema
— (String
)Sets the schema in which the heartbeat artifacts are created.
HeartbeatFrequency
— (Integer
)Sets the WAL heartbeat frequency (in minutes).
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SlotName
— (String
)Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance.
When used with the
CdcStartPosition
request parameter for the DMS API, this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting ofCdcStartPosition
. If the specified slot doesn't exist or the task doesn't have a validCdcStartPosition
setting, DMS raises an error.For more information about setting the
CdcStartPosition
request parameter, see Determining a CDC native start point in the Database Migration Service User Guide. For more information about usingCdcStartPosition
, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.PluginName
— (String
)Specifies the plugin to use to create a replication slot.
Possible values include:"no-preference"
"test-decoding"
"pglogical"
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the PostgreSQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the PostgreSQL endpoint connection details.
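The following minimal sketch creates a PostgreSQL source endpoint using the Secrets Manager options from the PostgreSQLSettings above instead of clear-text credentials; the ARN, secret ID, database, and slot names are placeholders.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-postgres-source', // placeholder identifier
  EndpointType: 'source',
  EngineName: 'postgres',
  PostgreSQLSettings: {
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-secrets-role', // placeholder ARN
    SecretsManagerSecretId: 'example/postgres/connection', // placeholder secret name
    DatabaseName: 'exampledb',   // placeholder database
    SlotName: 'example_dms_slot', // placeholder, previously created replication slot
    PluginName: 'pglogical'
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});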
MySQLSettings
— (map
)Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see Extra connection attributes when using MySQL as a source for DMS and Extra connection attributes when using a MySQL-compatible database as a target for DMS in the Database Migration Service User Guide.
AfterConnectScript
— (String
)Specifies a script to run immediately after DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails.
For this parameter, provide the code of the script itself, not the name of a file containing the script.
CleanSourceMetadataOnMismatch
— (Boolean
)Adjusts the behavior of DMS when migrating from an SQL Server source database that is hosted as part of an Always On availability group cluster. If you need DMS to poll all the nodes in the Always On cluster for transaction backups, set this attribute to
false
.DatabaseName
— (String
)Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the
DatabaseName
request parameter on either theCreateEndpoint
orModifyEndpoint
API call. SpecifyingDatabaseName
when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.EventsPollInterval
— (Integer
)Specifies how often to check the binary log for new changes/events when the database is idle.
Example:
eventsPollInterval=5;
In the example, DMS checks for changes in the binary logs every five seconds.
TargetDbType
— (String
)Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example:
targetDbType=MULTIPLE_DATABASES
Possible values include:
"specific-database"
"multiple-databases"
MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example:
maxFileSize=512
ParallelLoadThreads
— (Integer
)Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example:
parallelLoadThreads=1
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
ServerTimezone
— (String
)Specifies the time zone for the source MySQL database.
Example:
serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MySQL endpoint connection details.
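As a hedged sketch of the MySQLSettings above, the following creates a MySQL target endpoint; note that DatabaseName is deliberately omitted, as described above. Host and credential values are placeholders.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-mysql-target', // placeholder identifier
  EndpointType: 'target',
  EngineName: 'mysql',
  MySQLSettings: {
    ServerName: 'mysql.example.com', // placeholder host
    Port: 3306,
    Username: 'example-user',        // placeholder credentials
    Password: 'example-password',
    TargetDbType: 'multiple-databases',
    ParallelLoadThreads: 1,
    MaxFileSize: 512                 // KB
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});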
OracleSettings
— (map
)Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see Extra connection attributes when using Oracle as a source for DMS and Extra connection attributes when using Oracle as a target for DMS in the Database Migration Service User Guide.
AddSupplementalLogging
— (Boolean
)Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId
— (Integer
)Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the
AdditionalArchivedLogDestId
option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.AdditionalArchivedLogDestId
— (Integer
)Set this attribute with
ArchivedLogDestId
in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless necessary. For additional information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.ExtraArchivedLogDestIds
— (Array<Integer>
)Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the
dest_id
column in thev$archived_log
view. Use this setting with thearchivedLogDestId
extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup.This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, DMS needs information about what destination to get archive redo logs from to read changes. DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2]
In a primary-to-multiple-standby setup, you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]
Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless it's necessary. For more information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.AllowSelectNestedTables
— (Boolean
)Set this attribute to
true
to enable replication of Oracle tables containing columns that are nested tables or defined types.ParallelAsmReadThreads
— (Integer
)Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the
readAheadBlocks
attribute.ReadAheadBlocks
— (Integer
)Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly
— (Boolean
)Set this attribute to
false
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.UseAlternateFolderForOnline
— (Boolean
)Set this attribute to
true
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.OraclePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix
— (Boolean
)Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified
usePathPrefix
setting to access the redo logs.EnableHomogenousTablespace
— (Boolean
)Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog
— (Boolean
)When set to
true
, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.ArchivedLogsOnly
— (Boolean
)When this field is set to
Y
, DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the DMS user account needs to be granted ASM privileges.AsmPassword
— (String
)For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the
asm_user_password
value. You set this value as part of the comma-separated value that you set to thePassword
request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmServer
— (String
)For an Oracle source endpoint, your ASM server address. You can set this value from the
asm_server
value. You setasm_server
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmUser
— (String
)For an Oracle source endpoint, your ASM user name. You can set this value from the
asm_user
value. You setasm_user
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.CharLengthSemantics
— (String
)Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to
CHAR
. Otherwise, the character column length is in bytes.Example:
charLengthSemantics=CHAR;
Possible values include:
"default"
"char"
"byte"
DatabaseName
— (String
)Database name for the endpoint.
DirectPathParallelLoad
— (Boolean
)When set to
true
, this attribute specifies a parallel load whenuseDirectPathFullLoad
is set toY
. This attribute also only applies when you use the DMS parallel load feature. Note that the target table cannot have any constraints or indexes.FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this attribute causes a task to fail if the actual size of an LOB column is greater than the specifiedLobMaxSize
.If a task is set to limited LOB mode and this option is set to
true
, the task fails instead of truncating the LOB data.NumberDatatypeScale
— (Integer
)Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example:
numberDataTypeScale=12
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ReadTableSpaceName
— (Boolean
)When set to
true
, this attribute supports tablespace replication.RetryInterval
— (Integer
)Specifies the number of seconds that the system waits before resending a query.
Example:
retryInterval=6;
SecurityDbEncryption
— (String
)For an Oracle source endpoint, the transparent data encryption (TDE) password required by DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the
TDE_Password
part of the comma-separated value you set to thePassword
request parameter when you create the endpoint. The SecurityDbEncryption
setting is related to thisSecurityDbEncryptionName
setting. For more information, see Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.SecurityDbEncryptionName
— (String
)For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the
SecurityDbEncryption
setting. For more information on setting the key name value ofSecurityDbEncryptionName
, see the information and example for setting thesecurityDbEncryptionName
extra connection attribute in Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.ServerName
— (String
)Fully qualified domain name of the endpoint.
SpatialDataOptionToGeoJsonFunctionName
— (String
)Use this attribute to convert
SDO_GEOMETRY
toGEOJSON
format. By default, DMS calls theSDO2GEOJSON
custom function if present and accessible. Or you can create your own custom function that mimics the operation of SDO2GEOJSON
and setSpatialDataOptionToGeoJsonFunctionName
to call it instead.StandbyDelayTime
— (Integer
)Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases.
In DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
Username
— (String
)Endpoint connection user name.
UseBFile
— (Boolean
)Set this attribute to Y to capture change data using the Binary Reader utility. To set this attribute to Y, also set UseLogminerReader to N. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for CDC.UseDirectPathFullLoad
— (Boolean
)Set this attribute to Y to have DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
UseLogminerReader
— (Boolean
)Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set
UseLogminerReader
to N, also setUseBfile
to Y. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in the DMS User Guide.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Oracle endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Oracle endpoint connection details.SecretsManagerOracleAsmAccessRoleArn
— (String
)Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the
SecretsManagerOracleAsmSecret
. ThisSecretsManagerOracleAsmSecret
has the secret value that allows access to the Oracle ASM of the endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerOracleAsmSecretId
. Or you can specify clear-text values forAsmUserName
,AsmPassword
, andAsmServerName
. You can't specify both. For more information on creating thisSecretsManagerOracleAsmSecret
and theSecretsManagerOracleAsmAccessRoleArn
andSecretsManagerOracleAsmSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerOracleAsmSecretId
— (String
)Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN, partial ARN, or friendly name of the
SecretsManagerOracleAsmSecret
that contains the Oracle ASM connection details for the Oracle endpoint.
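The following sketch illustrates an Oracle source endpoint that uses Binary Reader with ASM and the Secrets Manager options from the OracleSettings above; every ARN, secret ID, and database name shown is a placeholder.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-oracle-source', // placeholder identifier
  EndpointType: 'source',
  EngineName: 'oracle',
  OracleSettings: {
    DatabaseName: 'EXAMPLEDB',        // placeholder database
    UseLogminerReader: false,         // use Binary Reader instead of LogMiner
    UseBFile: true,
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-secrets-role',         // placeholder ARN
    SecretsManagerSecretId: 'example/oracle/connection',                                            // placeholder secret name
    SecretsManagerOracleAsmAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-asm-role',    // placeholder ARN
    SecretsManagerOracleAsmSecretId: 'example/oracle/asm'                                           // placeholder secret name
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});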
SybaseSettings
— (map
)Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see Extra connection attributes when using SAP ASE as a source for DMS and Extra connection attributes when using SAP ASE as a target for DMS in the Database Migration Service User Guide.
DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SAP ASE endpoint connection details.
MicrosoftSQLServerSettings
— (map
)Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see Extra connection attributes when using SQL Server as a source for DMS and Extra connection attributes when using SQL Server as a target for DMS in the Database Migration Service User Guide.
Port
— (Integer
)Endpoint TCP port.
BcpPacketSize
— (Integer
)The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName
— (String
)Database name for the endpoint.
ControlTablesFileGroup
— (String
)Specifies a file group for the DMS internal tables. When the replication task starts, all the internal DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
Password
— (String
)Endpoint connection password.
QuerySingleAlwaysOnNode
— (Boolean
)Cleans and recreates table metadata information on the replication instance when a mismatch occurs. An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
ReadBackupOnly
— (Boolean
)When this attribute is set to
Y
, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter toY
enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.SafeguardPolicy
— (String
)Use this attribute to minimize the need to access the backup log and enable DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task: When this method is used, DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one DMS task can access the database at any given time. Therefore, if you need to run parallel DMS tasks against the same database, use the default method.
Possible values include:"rely-on-sql-server-replication-agent"
"exclusive-automatic-truncation"
"shared-automatic-truncation"
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
UseBcpFullLoad
— (Boolean
)Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
UseThirdPartyBackupDevice
— (Boolean
)When this attribute is set to
Y
, DMS processes third-party transaction log backups if they are created in native format.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SQL Server endpoint connection details.
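For illustration only, a SQL Server source endpoint using a few of the MicrosoftSQLServerSettings above; the server, database, and credential values are placeholders.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'example-sqlserver-source', // placeholder identifier
  EndpointType: 'source',
  EngineName: 'sqlserver',
  MicrosoftSQLServerSettings: {
    ServerName: 'sqlserver.example.com', // placeholder host
    Port: 1433,
    DatabaseName: 'ExampleDB',           // placeholder database
    Username: 'example-user',            // placeholder credentials
    Password: 'example-password',
    ReadBackupOnly: true,                // read changes from transaction log backups only
    SafeguardPolicy: 'rely-on-sql-server-replication-agent'
  }
};

dms.createEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});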
IBMDb2Settings
— (map
)Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see Extra connection attributes when using Db2 LUW as a source for DMS in the Database Migration Service User Guide.
DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port. The default value is 50000.
ServerName
— (String
)Fully qualified domain name of the endpoint.
SetDataCaptureChanges
— (Boolean
)Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn
— (String
)For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead
— (Integer
)Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Db2 LUW endpoint connection details.
ResourceIdentifier
— (String
)A friendly name for the resource identifier at the end of the
EndpointArn
response parameter that is returned in the createdEndpoint
object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such asExample-App-ARN1
. For example, this value might result in theEndpointArn
valuearn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1
. If you don't specify aResourceIdentifier
value, DMS generates a default identifier value for the end ofEndpointArn
.DocDbSettings
— (map
)Provides information that defines a DocumentDB endpoint.
Username
— (String
)The user name you use to access the DocumentDB source endpoint.
Password
— (String
)The password for the user account you use to access the DocumentDB source endpoint.
ServerName
— (String
)The name of the server on the DocumentDB source endpoint.
Port
— (Integer
)The port value for the DocumentDB source endpoint.
DatabaseName
— (String
)The database name on the DocumentDB source endpoint.
NestingLevel
— (String
)Specifies either document or table mode.
Default value is
Possible values include:"none"
. Specify"none"
to use document mode. Specify"one"
to use table mode."none"
"one"
ExtractDocId
— (Boolean
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (Integer
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the DocumentDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the DocumentDB endpoint connection details.
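As an illustration of the NestingLevel and DocsToInvestigate settings described above, a DocumentDB source endpoint that uses table mode might pass a DocDbSettings map like the following sketch. The host name, credentials, and database are placeholders, and the other required top-level fields are omitted for brevity.
var params = {
  // EndpointIdentifier, EndpointType ('source'), and EngineName are also required; see above.
  DocDbSettings: {
    ServerName: 'docdb.cluster.example.com', // placeholder host
    Port: 27017,
    DatabaseName: 'appdb',                   // placeholder database
    Username: 'dmsuser',
    Password: 'REPLACE_ME',
    NestingLevel: 'one',                     // table mode
    DocsToInvestigate: 1000                  // documents sampled to infer the table structure
  }
};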
RedisSettings
— (map
)Settings in JSON format for the target Redis endpoint.
ServerName
— required — (String
)Fully qualified domain name of the endpoint.
Port
— required — (Integer
)Transmission Control Protocol (TCP) port for the endpoint.
SslSecurityProtocol
— (String
)The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include
plaintext
andssl-encryption
. The default isssl-encryption
. Thessl-encryption
option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using theSslCaCertificateArn
setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA.The
Possible values include:plaintext
option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database."plaintext"
"ssl-encryption"
AuthType
— (String
)The type of authentication to perform when connecting to a Redis target. Options include
Possible values include:none
,auth-token
, andauth-role
. Theauth-token
option requires anAuthPassword
value to be provided. Theauth-role
option requiresAuthUserName
andAuthPassword
values to be provided."none"
"auth-role"
"auth-token"
AuthUserName
— (String
)The user name provided with the
auth-role
option of theAuthType
setting for a Redis target endpoint.AuthPassword
— (String
)The password provided with the
auth-role
andauth-token
options of theAuthType
setting for a Redis target endpoint.SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
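For example, a Redis target that uses TLS and token-based authentication, as described above, might be configured with a RedisSettings map like this sketch. The server name, password, and CA certificate ARN are placeholders.
var params = {
  // EndpointIdentifier, EndpointType ('target'), and EngineName for the Redis target are also required.
  RedisSettings: {
    ServerName: 'redis.example.com',         // placeholder host
    Port: 6379,
    SslSecurityProtocol: 'ssl-encryption',   // TLS connection (the default)
    AuthType: 'auth-token',                  // requires AuthPassword
    AuthPassword: 'REPLACE_ME',
    SslCaCertificateArn: 'arn:aws:dms:us-east-1:123456789012:cert:EXAMPLE' // optional custom CA (hypothetical ARN)
  }
};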
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Endpoint
— (map
)The endpoint that was created.
EndpointIdentifier
— (String
)The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType
— (String
)The type of endpoint. Valid values are
Possible values include:source
andtarget
."source"
"target"
EngineName
— (String
)The database engine name. Valid values, depending on the EndpointType, include
"mysql"
,"oracle"
,"postgres"
,"mariadb"
,"aurora"
,"aurora-postgresql"
,"redshift"
,"s3"
,"db2"
,"azuredb"
,"sybase"
,"dynamodb"
,"mongodb"
,"kinesis"
,"kafka"
,"elasticsearch"
,"documentdb"
,"sqlserver"
, and"neptune"
.EngineDisplayName
— (String
)The expanded name for the engine name. For example, if the
EngineName
parameter is "aurora," this value would be "Amazon Aurora MySQL."Username
— (String
)The user name used to connect to the endpoint.
ServerName
— (String
)The name of the server at the endpoint.
Port
— (Integer
)The port value used to access the endpoint.
DatabaseName
— (String
)The name of the database at the endpoint.
ExtraConnectionAttributes
— (String
)Additional connection attributes used to connect to the endpoint.
Status
— (String
)The status of the endpoint.
KmsKeyId
— (String
)An KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn
— (String
)The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode
— (String
)The SSL mode used to connect to the endpoint. The default value is
Possible values include:none
."none"
"require"
"verify-ca"
"verify-full"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.ExternalTableDefinition
— (String
)The external table definition.
ExternalId
— (String
)Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint in a cross-account scenario.
DynamoDbSettings
— (map
)The settings for the DynamoDB target endpoint. For more information, see the
DynamoDBSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.
S3Settings
— (map
)The settings for the S3 target endpoint. For more information, see the
S3Settings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.ExternalTableDefinition
— (String
)Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter
— (String
)The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (
\n
).CsvDelimiter
— (String
)The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder
— (String
)An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path
bucketFolder/schema_name/table_name/
. If this parameter isn't specified, then the path used isschema_name/table_name/
.BucketName
— (String
)The name of the S3 bucket.
CompressionType
— (String
)An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
Possible values include:"none"
"gzip"
EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use
SSE_S3
, you need an Identity and Access Management (IAM) role with permission to allow"arn:aws:s3:::dms-*"
to use the following actions:-
s3:CreateBucket
-
s3:ListBucket
-
s3:DeleteBucket
-
s3:GetBucketLocation
-
s3:GetObject
-
s3:PutObject
-
s3:DeleteObject
-
s3:GetObjectVersion
-
s3:GetBucketPolicy
-
s3:PutBucketPolicy
-
s3:DeleteBucketPolicy
"sse-s3"
"sse-kms"
-
ServerSideEncryptionKmsKeyId
— (String
)If you are using
SSE_KMS
for theEncryptionMode
, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.Here is a CLI example:
aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat
— (String
)The format of the data that you want to use for output. You can choose one of the following:
-
csv
: This is a row-based file format with comma-separated values (.csv). -
parquet
: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
"csv"
"parquet"
-
EncodingType
— (String
)The type of encoding you are using:
-
RLE_DICTIONARY
uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default. -
PLAIN
doesn't use encoding at all. Values are stored as they are. -
PLAIN_DICTIONARY
builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
"plain"
"plain-dictionary"
"rle-dictionary"
-
DictPageSizeLimit
— (Integer
)The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of
PLAIN
. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts toPLAIN
encoding. This size is used for .parquet file format only.RowGroupLength
— (Integer
)The number of rows in a row group. A smaller row group size provides faster reads, but a larger number of row groups slows down writes. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum,
RowGroupLength
is set to the max row group length in bytes (64 * 1024 * 1024).DataPageSize
— (Integer
)The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion
— (String
)The version of the Apache Parquet format that you want to use:
Possible values include:parquet_1_0
(the default) orparquet_2_0
."parquet-1-0"
"parquet-2-0"
EnableStatistics
— (Boolean
)A value that enables statistics for Parquet pages and row groups. Choose
true
to enable statistics,false
to disable. Statistics includeNULL
,DISTINCT
,MAX
, andMIN
values. This parameter defaults totrue
. This value is used for .parquet file format only.IncludeOpForFullLoad
— (Boolean
)A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note: DMS supports theIncludeOpForFullLoad
parameter in versions 3.1.4 and later.For full load, records can only be inserted. By default (the
false
setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. IfIncludeOpForFullLoad
is set totrue
ory
, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.Note: This setting works together with theCdcInsertsOnly
and theCdcInsertsAndUpdates
parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.CdcInsertsOnly
— (Boolean
)A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the
false
setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.If
CdcInsertsOnly
is set totrue
ory
, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value ofIncludeOpForFullLoad
. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to I to indicate the INSERT operation at the source. IfIncludeOpForFullLoad
is set tofalse
, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.Note: DMS supports the interaction described above between theCdcInsertsOnly
andIncludeOpForFullLoad
parameters in versions 3.1.4 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.TimestampColumnName
— (String
)A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note: DMS supports theTimestampColumnName
parameter in versions 3.1.4 and later.DMS includes an additional
STRING
column in the .csv or .parquet object files of your migrated data when you setTimestampColumnName
to a nonblank value.For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is
yyyy-MM-dd HH:mm:ss.SSSSSS
. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.When the
AddColumnName
parameter is set totrue
, DMS also includes a name for the timestamp column that you set withTimestampColumnName
.ParquetTimestampInMillisecond
— (Boolean
)A value that specifies the precision of any
TIMESTAMP
column values that are written to an Amazon S3 object file in .parquet format.Note: DMS supports theParquetTimestampInMillisecond
parameter in versions 3.1.4 and later.When
ParquetTimestampInMillisecond
is set totrue
ory
, DMS writes allTIMESTAMP
columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.Currently, Amazon Athena and Glue can handle only millisecond precision for
TIMESTAMP
values. Set this parameter totrue
for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.Note: DMS writes anyTIMESTAMP
column values written to an S3 file in .csv format with microsecond precision. SettingParquetTimestampInMillisecond
has no effect on the string format of the timestamp column value that is inserted by setting theTimestampColumnName
parameter.CdcInsertsAndUpdates
— (Boolean
)A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is
false
, but whenCdcInsertsAndUpdates
is set totrue
ory
, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the
IncludeOpForFullLoad
parameter. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to eitherI
orU
to indicate INSERT and UPDATE operations at the source. But ifIncludeOpForFullLoad
is set tofalse
, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.Note: DMS supports the use of theCdcInsertsAndUpdates
parameter in versions 3.3.1 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.DatePartitionEnabled
— (Boolean
)When set to
true
, this parameter partitions S3 bucket folders based on transaction commit dates. The default value isfalse
. For more information about date-based folder partitioning, see Using date-based folder partitioning.DatePartitionSequence
— (String
)Identifies the sequence of the date format to use during folder partitioning. The default value is
Possible values include:YYYYMMDD
. Use this parameter whenDatePartitionEnabled
is set totrue
."YYYYMMDD"
"YYYYMMDDHH"
"YYYYMM"
"MMYYYYDD"
"DDMMYYYY"
DatePartitionDelimiter
— (String
)Specifies a date separating delimiter to use during folder partitioning. The default value is
Possible values include:SLASH
. Use this parameter whenDatePartitionEnabled
is set totrue
."SLASH"
"UNDERSCORE"
"DASH"
"NONE"
UseCsvNoSupValue
— (Boolean
)This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to
true
for columns not included in the supplemental log, DMS uses the value specified byCsvNoSupValue
. If not set or set tofalse
, DMS uses the null value for these columns.Note: This setting is supported in DMS versions 3.4.1 and later.CsvNoSupValue
— (String
)This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If
UseCsvNoSupValue
is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of theUseCsvNoSupValue
setting.Note: This setting is supported in DMS versions 3.4.1 and later.PreserveTransactions
— (Boolean
)If set to
true
, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified byCdcPath
. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.Note: This setting is supported in DMS versions 3.4.2 and later.CdcPath
— (String
)Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If
CdcPath
is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you setPreserveTransactions
totrue
, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified byBucketFolder
andBucketName
.For example, if you specify
CdcPath
asMyChangedData
, and you specifyBucketName
asMyTargetBucket
but do not specifyBucketFolder
, DMS creates the following CDC folder path:MyTargetBucket/MyChangedData
.If you specify the same
CdcPath
, and you specifyBucketName
asMyTargetBucket
andBucketFolder
asMyTargetData
, DMS creates the following CDC folder path:MyTargetBucket/MyTargetData/MyChangedData
.For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
Note: This setting is supported in DMS versions 3.4.2 and later.CannedAclForObjects
— (String
)A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
Possible values include:"none"
"private"
"public-read"
"public-read-write"
"authenticated-read"
"aws-exec-read"
"bucket-owner-read"
"bucket-owner-full-control"
AddColumnName
— (Boolean
)An optional parameter that, when set to true or y, adds column name information to the .csv output file.The default value is
false
. Valid values aretrue
,false
,y
, andn
.CdcMaxBatchInterval
— (Integer
)Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When
CdcMaxBatchInterval
andCdcMinFileSize
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.The default value is 60 seconds.
CdcMinFileSize
— (Integer
)Minimum file size, defined in megabytes, to reach for a file output to Amazon S3.
When
CdcMinFileSize
andCdcMaxBatchInterval
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.The default value is 32 MB.
CsvNullValue
— (String
)An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of
NULL
.The default value is
NULL
. Valid values include any valid string.IgnoreHeaderRows
— (Integer
)When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
MaxFileSize
— (Integer
)A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
Rfc4180
— (Boolean
)For an S3 source, when this value is set to
true
ory
, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set tofalse
orn
, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to
true
ory
using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.The default value is
true
. Valid values includetrue
,false
,y
, andn
.
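Putting several of the preceding S3 settings together, the following sketch shows an S3Settings map as it might be supplied on the corresponding createEndpoint request for a .csv target (the same structure is echoed back here in data.Endpoint.S3Settings). The bucket, folder, and role ARN are hypothetical.
var params = {
  EndpointIdentifier: 's3-target-1',   // example identifier
  EndpointType: 'target',
  EngineName: 's3',
  S3Settings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-s3-role', // hypothetical role
    BucketName: 'my-dms-target-bucket',                                 // hypothetical bucket
    BucketFolder: 'migrated',
    DataFormat: 'csv',
    CompressionType: 'gzip',
    IncludeOpForFullLoad: true,          // write 'I' in the first field of full-load rows
    CdcInsertsAndUpdates: true,          // CDC output contains only INSERTs and UPDATEs (not combined with CdcInsertsOnly)
    TimestampColumnName: 'dms_commit_ts',// add a STRING timestamp column
    AddColumnName: true,                 // write column names to the .csv output
    DatePartitionEnabled: true           // partition folders by transaction commit date
  }
};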
DmsTransferSettings
— (map
)The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
-
ServiceAccessRoleArn
- The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow theiam:PassRole
action. -
BucketName
- The name of the S3 bucket to use.
Shorthand syntax for these settings is as follows:
ServiceAccessRoleArn=string,BucketName=string,
JSON syntax for these settings is as follows:
{ "ServiceAccessRoleArn": "string", "BucketName": "string"}
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the
iam:PassRole
action.BucketName
— (String
)The name of the S3 bucket to use.
-
MongoDbSettings
— (map
)The settings for the MongoDB source endpoint. For more information, see the
MongoDbSettings
structure.Username
— (String
)The user name you use to access the MongoDB source endpoint.
Password
— (String
)The password for the user account you use to access the MongoDB source endpoint.
ServerName
— (String
)The name of the server on the MongoDB source endpoint.
Port
— (Integer
)The port value for the MongoDB source endpoint.
DatabaseName
— (String
)The database name on the MongoDB source endpoint.
AuthType
— (String
)The authentication type you use to access the MongoDB source endpoint.
When set to
Possible values include:"no"
, user name and password parameters are not used and can be empty."no"
"password"
AuthMechanism
— (String
)The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x,
Possible values include:"default"
is"mongodb_cr"
. For MongoDB version 3.x or later,"default"
is"scram_sha_1"
. This setting isn't used whenAuthType
is set to"no"
."default"
"mongodb_cr"
"scram_sha_1"
NestingLevel
— (String
)Specifies either document or table mode.
Default value is
Possible values include:"none"
. Specify"none"
to use document mode. Specify"one"
to use table mode."none"
"one"
ExtractDocId
— (String
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (String
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.AuthSource
— (String
)The MongoDB database name. This setting isn't used when
AuthType
is set to"no"
.The default is
"admin"
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MongoDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MongoDB endpoint connection details.
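For reference, a MongoDbSettings map using password authentication with SCRAM-SHA-1 and document mode, as described above, might be supplied on the corresponding createEndpoint request as in this sketch. Connection details are placeholders.
var params = {
  // EndpointIdentifier, EndpointType ('source'), and EngineName ('mongodb') are also required.
  MongoDbSettings: {
    ServerName: 'mongo.example.com',  // placeholder host
    Port: 27017,
    DatabaseName: 'appdb',            // placeholder database
    Username: 'dmsuser',
    Password: 'REPLACE_ME',
    AuthType: 'password',
    AuthMechanism: 'scram_sha_1',
    AuthSource: 'admin',              // database that stores the user credentials
    NestingLevel: 'none',             // document mode
    ExtractDocId: 'true'              // string value; include the document ID
  }
};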
KinesisSettings
— (map
)The settings for the Amazon Kinesis target endpoint. For more information, see the
KinesisSettings
structure.StreamArn
— (String
)The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is
Possible values include:JSON
(default) orJSON_UNFORMATTED
(a single line with no tab)."json"
"json-unformatted"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Kinesis data stream. The role must allow the
iam:PassRole
action.IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kinesis message output, unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is
false
.IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
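A Kinesis target corresponding to these settings might be supplied on the createEndpoint request as in the following sketch. The stream ARN and role ARN are hypothetical.
var params = {
  EndpointIdentifier: 'kinesis-target-1',   // example identifier
  EndpointType: 'target',
  EngineName: 'kinesis',
  KinesisSettings: {
    StreamArn: 'arn:aws:kinesis:us-east-1:123456789012:stream/dms-stream',   // hypothetical stream
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-kinesis-role', // hypothetical role
    MessageFormat: 'json',
    PartitionIncludeSchemaTable: true,   // prefix schema and table names to partition values for better shard distribution
    IncludeTransactionDetails: false,
    IncludeNullAndEmpty: false
  }
};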
KafkaSettings
— (map
)The settings for the Apache Kafka target endpoint. For more information, see the
KafkaSettings
structure.Broker
— (String
)A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form
broker-hostname-or-ip:port
. For example,"ec2-12-345-678-901.compute-1.amazonaws.com:2345"
. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.Topic
— (String
)The topic to which you migrate the data. If you don't specify a topic, DMS specifies
"kafka-default-topic"
as the migration topic.MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is
Possible values include:JSON
(default) orJSON_UNFORMATTED
(a single line with no tab)."json"
"json-unformatted"
IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kafka message output unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is
false
.MessageMaxBytes
— (Integer
)The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.SecurityProtocol
— (String
)Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include
Possible values include:ssl-encryption
,ssl-authentication
, andsasl-ssl
.sasl-ssl
requiresSaslUsername
andSaslPassword
."plaintext"
"ssl-authentication"
"ssl-encryption"
"sasl-ssl"
SslClientCertificateArn
— (String
)The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
SslClientKeyArn
— (String
)The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
SslClientKeyPassword
— (String
)The password for the client private key used to securely connect to a Kafka target endpoint.
SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
SaslUsername
— (String
)The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
SaslPassword
— (String
)The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
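Similarly, a Kafka target using SASL-SSL authentication, as described above, might be supplied on the createEndpoint request with a KafkaSettings map like this sketch. The broker address, topic, and credentials are placeholders.
var params = {
  EndpointIdentifier: 'kafka-target-1',   // example identifier
  EndpointType: 'target',
  EngineName: 'kafka',
  KafkaSettings: {
    Broker: 'b-1.example.kafka.us-east-1.amazonaws.com:9096',  // placeholder broker host:port
    Topic: 'dms-migration-topic',                              // placeholder topic
    MessageFormat: 'json',
    SecurityProtocol: 'sasl-ssl',   // requires SaslUsername and SaslPassword
    SaslUsername: 'dms-client',
    SaslPassword: 'REPLACE_ME',
    IncludeControlDetails: true     // include table and column definition changes in the message output
  }
};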
ElasticsearchSettings
— (map
)The settings for the Elasticsearch source endpoint. For more information, see the
ElasticsearchSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.EndpointUri
— required — (String
)The endpoint for the Elasticsearch cluster. DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage
— (Integer
)The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter takes effect only after 1,000 records have been transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration
— (Integer
)The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings
— (map
)The settings for the Amazon Neptune target endpoint. For more information, see the
NeptuneSettings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. The role must allow the
iam:PassRole
action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the Database Migration Service User Guide.S3BucketName
— required — (String
)The name of the Amazon S3 bucket where DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder
— required — (String
)A folder path where you want DMS to store migrated graph data in the S3 bucket specified by
S3BucketName
ErrorRetryDuration
— (Integer
)The number of milliseconds for DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize
— (Integer
)The maximum size in kilobytes of migrated graph data stored in a .csv file before DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount
— (Integer
)The number of times for DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled
— (Boolean
)If you want Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to
true
. Then attach the appropriate IAM policy document to your service role specified byServiceAccessRoleArn
. The default isfalse
.
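A Neptune target that uses the bulk-load staging bucket described above might be supplied on the createEndpoint request as in the following sketch. The bucket, folder, and role names are hypothetical.
var params = {
  EndpointIdentifier: 'neptune-target-1',   // example identifier
  EndpointType: 'target',
  EngineName: 'neptune',
  NeptuneSettings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-neptune-role', // hypothetical role
    S3BucketName: 'dms-neptune-staging',   // hypothetical staging bucket
    S3BucketFolder: 'graph-load',
    MaxFileSize: 1048576,                  // KB of staged graph data before each bulk load
    MaxRetryCount: 5,
    IamAuthEnabled: true                   // requires the matching IAM policy on the service role
  }
};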
RedshiftSettings
— (map
)Settings for the Amazon Redshift endpoint.
AcceptAnyDate
— (Boolean
)A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose
true
orfalse
(the default).This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript
— (String
)Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder
— (String
)An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift
COPY
command to upload the .csv files to the target table. The files are deleted once theCOPY
operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName
— (String
)The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames
— (Boolean
)If Amazon Redshift is configured to support case sensitive schema names, set
CaseSensitiveNames
totrue
. The default isfalse
.CompUpdate
— (Boolean
)If you set
CompUpdate
totrue
, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other thanRAW
. If you setCompUpdate
tofalse
, automatic compression is disabled and existing column encodings aren't changed. The default istrue
.ConnectionTimeout
— (Integer
)A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName
— (String
)The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat
— (String
)The date format that you are using. Valid values are
auto
(case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Usingauto
recognizes most strings, even some that aren't supported when you use a date format string.If your date and time values use formats different from each other, set this to
auto
.EmptyAsNull
— (Boolean
)A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of
true
sets empty CHAR and VARCHAR fields to null. The default isfalse
.EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use
Possible values include:SSE_S3
, create an Identity and Access Management (IAM) role with a policy that allows"arn:aws:s3:::*"
to use the following actions:"s3:PutObject", "s3:ListBucket"
"sse-s3"
"sse-kms"
ExplicitIds
— (Boolean
)This setting is only valid for a full-load migration task. Set
ExplicitIds
totrue
to have tables withIDENTITY
columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default isfalse
.FileTransferUploadStreams
— (Integer
)The number of parallel streams (threads) used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams
accepts a value from 1 through 64. It defaults to 10.LoadTimeout
— (Integer
)The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize
— (Integer
)The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
Password
— (String
)The password for the user named in the
username
property.Port
— (Integer
)The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes
— (Boolean
)A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose
true
to remove quotation marks. The default isfalse
.ReplaceInvalidChars
— (String
)A list of characters that you want to replace. Use with
ReplaceChars
.ReplaceChars
— (String
)A value that specifies the characters to substitute for the invalid characters specified in
ReplaceInvalidChars
. The default is"?"
.ServerName
— (String
)The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the
iam:PassRole
action.ServerSideEncryptionKmsKeyId
— (String
)The KMS key ID. If you are using
SSE_KMS
for theEncryptionMode
, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.TimeFormat
— (String
)The time format that you want to use. Valid values are
auto
(case-sensitive),'timeformat_string'
,'epochsecs'
, or'epochmillisecs'
. It defaults to 10. Usingauto
recognizes most strings, even some that aren't supported when you use a time format string.If your date and time values use formats different from each other, set this parameter to
auto
.TrimBlanks
— (Boolean
)A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose
true
to remove unneeded white space. The default isfalse
.TruncateColumns
— (Boolean
)A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose
true
to truncate data. The default isfalse
.Username
— (String
)An Amazon Redshift user name for a registered user.
WriteBufferSize
— (Integer
)The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Amazon Redshift endpoint connection details.
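Combining several of the preceding Redshift settings, a Redshift target that stages .csv files in an intermediate S3 bucket and encrypts them with a KMS key might be supplied on the createEndpoint request as in this sketch. The cluster endpoint, bucket, role, and key identifiers are hypothetical.
var params = {
  EndpointIdentifier: 'redshift-target-1',   // example identifier
  EndpointType: 'target',
  EngineName: 'redshift',
  RedshiftSettings: {
    ServerName: 'example-cluster.abc123.us-east-1.redshift.amazonaws.com', // hypothetical cluster endpoint
    Port: 5439,
    DatabaseName: 'dev',
    Username: 'dmsuser',
    Password: 'REPLACE_ME',
    BucketName: 'dms-redshift-staging',   // intermediate S3 bucket (hypothetical)
    BucketFolder: 'csv-staging',
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-redshift-role', // hypothetical role
    EncryptionMode: 'sse-kms',
    ServerSideEncryptionKmsKeyId: 'arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID', // hypothetical key
    FileTransferUploadStreams: 20,        // parallel multipart upload streams per .csv file
    TruncateColumns: true
  }
};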
PostgreSQLSettings
— (map
)The settings for the PostgreSQL source and target endpoint. For more information, see the
PostgreSQLSettings
structure.AfterConnectScript
— (String
)For use with change data capture (CDC) only, this attribute has DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example:
afterConnectScript=SET session_replication_role='replica'
CaptureDdls
— (Boolean
)To capture DDL events, DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to
N
, you don't have to create tables or triggers on the source database.MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example:
maxFileSize=512
DatabaseName
— (String
)Database name for the endpoint.
DdlArtifactsSchema
— (String
)The schema in which the operational DDL database artifacts are created.
Example:
ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout
— (Integer
)Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example:
executeTimeout=100;
FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this value causes a task to fail if the actual size of a LOB column is greater than the specifiedLobMaxSize
.If a task is set to Limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
HeartbeatEnable
— (Boolean
)The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps
restart_lsn
moving and prevents storage full scenarios.HeartbeatSchema
— (String
)Sets the schema in which the heartbeat artifacts are created.
HeartbeatFrequency
— (Integer
)Sets the WAL heartbeat frequency (in minutes).
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SlotName
— (String
)Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance.
When used with the
CdcStartPosition
request parameter for the DMS API, this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting ofCdcStartPosition
. If the specified slot doesn't exist or the task doesn't have a validCdcStartPosition
setting, DMS raises an error.For more information about setting the
CdcStartPosition
request parameter, see Determining a CDC native start point in the Database Migration Service User Guide. For more information about usingCdcStartPosition
, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.PluginName
— (String
)Specifies the plugin to use to create a replication slot.
Possible values include:"no-preference"
"test-decoding"
"pglogical"
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the PostgreSQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the PostgreSQL endpoint connection details.
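For a PostgreSQL CDC source that uses an existing replication slot and the WAL heartbeat, as described above, the PostgreSQLSettings map might be supplied on the createEndpoint request as in the following sketch. The connection details and slot name are placeholders.
var params = {
  EndpointIdentifier: 'postgres-source-1',   // example identifier
  EndpointType: 'source',
  EngineName: 'postgres',
  PostgreSQLSettings: {
    ServerName: 'pg.example.com',   // placeholder host
    Port: 5432,
    DatabaseName: 'appdb',          // placeholder database
    Username: 'dmsuser',
    Password: 'REPLACE_ME',
    SlotName: 'dms_slot_1',         // previously created logical replication slot (hypothetical name)
    PluginName: 'pglogical',
    CaptureDdls: true,              // capture DDL events (creates artifacts in the source database)
    HeartbeatEnable: true,          // keep restart_lsn moving on idle databases
    HeartbeatFrequency: 5,          // heartbeat interval in minutes
    FailTasksOnLobTruncation: true
  }
};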
MySQLSettings
— (map
)The settings for the MySQL source and target endpoint. For more information, see the
MySQLSettings
structure.AfterConnectScript
— (String
)Specifies a script to run immediately after DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails.
For this parameter, provide the code of the script itself, not the name of a file containing the script.
CleanSourceMetadataOnMismatch
— (Boolean
)Cleans and recreates table metadata information on the replication instance when a mismatch occurs. For example, in a situation where running an alter DDL statement on a table could result in different information about the table cached in the replication instance
.DatabaseName
— (String
)Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the
DatabaseName
request parameter on either theCreateEndpoint
orModifyEndpoint
API call. SpecifyingDatabaseName
when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.EventsPollInterval
— (Integer
)Specifies how often to check the binary log for new changes/events when the database is idle.
Example:
eventsPollInterval=5;
In the example, DMS checks for changes in the binary logs every five seconds.
TargetDbType
— (String
)Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example:
Possible values include:targetDbType=MULTIPLE_DATABASES
"specific-database"
"multiple-databases"
MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example:
maxFileSize=512
ParallelLoadThreads
— (Integer
)Improves performance when loading data into the MySQL-compatible target database by specifying how many threads to use for the load. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example:
parallelLoadThreads=1
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
ServerTimezone
— (String
)Specifies the time zone for the source MySQL database.
Example:
serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MySQL endpoint connection details.
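As a final illustration, a MySQL-compatible target that keeps source databases separate and loads with multiple threads, per the settings above, might be supplied on the createEndpoint request with a MySQLSettings map like this sketch. The host and credentials are placeholders.
var params = {
  EndpointIdentifier: 'mysql-target-1',   // example identifier
  EndpointType: 'target',
  EngineName: 'mysql',
  MySQLSettings: {
    ServerName: 'mysql.example.com',      // placeholder host
    Port: 3306,
    Username: 'dmsuser',
    Password: 'REPLACE_ME',
    TargetDbType: 'multiple-databases',   // migrate each source database to its own target database
    ParallelLoadThreads: 4,               // each thread opens its own connection to the target
    MaxFileSize: 512                      // KB per intermediate .csv file
  }
};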
OracleSettings
— (map
)The settings for the Oracle source and target endpoint. For more information, see the
OracleSettings
structure.AddSupplementalLogging
— (Boolean
)Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId
— (Integer
)Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the
AdditionalArchivedLogDestId
option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.AdditionalArchivedLogDestId
— (Integer
)Set this attribute with
ArchivedLogDestId
in a primary/standby setup. This attribute is useful in the case of a switchover, when DMS needs to know which destination to get archive redo logs from to read changes, because the previous primary instance is now a standby instance after switchover.Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless necessary. For additional information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.ExtraArchivedLogDestIds
— (Array<Integer>
)Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the
dest_id
column in thev$archived_log
view. Use this setting with thearchivedLogDestId
extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup.This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, DMS needs information about what destination to get archive redo logs from to read changes. DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2]
In a primary-to-multiple-standby setup, you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]
Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless it's necessary. For more information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.AllowSelectNestedTables
— (Boolean
)Set this attribute to
true
to enable replication of Oracle tables containing columns that are nested tables or defined types.ParallelAsmReadThreads
— (Integer
)Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the
readAheadBlocks
attribute.ReadAheadBlocks
— (Integer
)Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly
— (Boolean
)Set this attribute to
false
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.UseAlternateFolderForOnline
— (Boolean
)Set this attribute to
true
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.OraclePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix
— (Boolean
)Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified
usePathPrefix
setting to access the redo logs.EnableHomogenousTablespace
— (Boolean
)Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog
— (Boolean
)When set to
true
, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.ArchivedLogsOnly
— (Boolean
)When this field is set to
Y
, DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the DMS user account needs to be granted ASM privileges.AsmPassword
— (String
)For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the
asm_user_password
value. You set this value as part of the comma-separated value that you set to thePassword
request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmServer
— (String
)For an Oracle source endpoint, your ASM server address. You can set this value from the
asm_server
value. You setasm_server
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmUser
— (String
)For an Oracle source endpoint, your ASM user name. You can set this value from the
asm_user
value. You setasm_user
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.CharLengthSemantics
— (String
)Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to
CHAR
. Otherwise, the character column length is in bytes.Example: charLengthSemantics=CHAR;
Possible values include:
"default"
"char"
"byte"
DatabaseName
— (String
)Database name for the endpoint.
DirectPathParallelLoad
— (Boolean
)When set to
true
, this attribute specifies a parallel load whenuseDirectPathFullLoad
is set toY
. This attribute also only applies when you use the DMS parallel load feature. Note that the target table cannot have any constraints or indexes.FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this attribute causes a task to fail if the actual size of an LOB column is greater than the specifiedLobMaxSize
.If a task is set to limited LOB mode and this option is set to
true
, the task fails instead of truncating the LOB data.NumberDatatypeScale
— (Integer
)Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example:
numberDataTypeScale=12
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ReadTableSpaceName
— (Boolean
)When set to
true
, this attribute supports tablespace replication.RetryInterval
— (Integer
)Specifies the number of seconds that the system waits before resending a query.
Example:
retryInterval=6;
SecurityDbEncryption
— (String
)For an Oracle source endpoint, the transparent data encryption (TDE) password required by DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the
TDE_Password
part of the comma-separated value you set to thePassword
request parameter when you create the endpoint. The SecurityDbEncryption
setting is related to thisSecurityDbEncryptionName
setting. For more information, see Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.SecurityDbEncryptionName
— (String
)For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the
SecurityDbEncryption
setting. For more information on setting the key name value ofSecurityDbEncryptionName
, see the information and example for setting thesecurityDbEncryptionName
extra connection attribute in Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.ServerName
— (String
)Fully qualified domain name of the endpoint.
SpatialDataOptionToGeoJsonFunctionName
— (String
)Use this attribute to convert
SDO_GEOMETRY
toGEOJSON
format. By default, DMS calls theSDO2GEOJSON
custom function if present and accessible. Or you can create your own custom function that mimics the operation of SDO2GEOJSON
and setSpatialDataOptionToGeoJsonFunctionName
to call it instead.StandbyDelayTime
— (Integer
)Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases.
In DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
Username
— (String
)Endpoint connection user name.
UseBFile
— (Boolean
)Set this attribute to Y to capture change data using the Binary Reader utility. When you set this attribute to Y, also set
UseLogminerReader
to N. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for CDC.UseDirectPathFullLoad
— (Boolean
)Set this attribute to Y to have DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
UseLogminerReader
— (Boolean
)Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set
UseLogminerReader
to N, also setUseBfile
to Y. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in the DMS User Guide.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Oracle endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Oracle endpoint connection details.SecretsManagerOracleAsmAccessRoleArn
— (String
)Required only if your Oracle endpoint uses Automatic Storage Management (ASM). The full ARN of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the
SecretsManagerOracleAsmSecret
. ThisSecretsManagerOracleAsmSecret
has the secret value that allows access to the Oracle ASM of the endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerOracleAsmSecretId
. Or you can specify clear-text values forAsmUserName
,AsmPassword
, andAsmServerName
. You can't specify both. For more information on creating thisSecretsManagerOracleAsmSecret
and theSecretsManagerOracleAsmAccessRoleArn
andSecretsManagerOracleAsmSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerOracleAsmSecretId
— (String
)Required only if your Oracle endpoint uses Automatic Storage Management (ASM). The full ARN, partial ARN, or friendly name of the
SecretsManagerOracleAsmSecret
that contains the Oracle ASM connection details for the Oracle endpoint.
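As a sketch of how the Binary Reader and ASM settings above fit together (not an example from the service reference), a source endpoint for an Oracle database that stores redo logs in ASM might be created roughly as follows. All identifiers, ARNs, and secret IDs are placeholders.
var params = {
  EndpointIdentifier: 'my-oracle-source',  // placeholder
  EndpointType: 'source',
  EngineName: 'oracle',
  OracleSettings: {
    UseLogminerReader: false,  // read the redo logs as a binary file ...
    UseBFile: true,            // ... using the Binary Reader utility
    ArchivedLogsOnly: true,    // read only the archived redo logs
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-secrets-role', // placeholder
    SecretsManagerSecretId: 'my-oracle-secret',                                     // placeholder
    // Required only because this endpoint uses ASM:
    SecretsManagerOracleAsmAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-asm-role', // placeholder
    SecretsManagerOracleAsmSecretId: 'my-oracle-asm-secret'                              // placeholder
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});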
SybaseSettings
— (map
)The settings for the SAP ASE source and target endpoint. For more information, see the
SybaseSettings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SAP ASE endpoint connection details.
MicrosoftSQLServerSettings
— (map
)The settings for the Microsoft SQL Server source and target endpoint. For more information, see the
MicrosoftSQLServerSettings
structure.Port
— (Integer
)Endpoint TCP port.
BcpPacketSize
— (Integer
)The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName
— (String
)Database name for the endpoint.
ControlTablesFileGroup
— (String
)Specifies a file group for the DMS internal tables. When the replication task starts, all the internal DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
Password
— (String
)Endpoint connection password.
QuerySingleAlwaysOnNode
— (Boolean
)Cleans and recreates table metadata information on the replication instance when a mismatch occurs. An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
ReadBackupOnly
— (Boolean
)When this attribute is set to
Y
, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter toY
enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.SafeguardPolicy
— (String
)Use this attribute to minimize the need to access the backup log and enable DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task: When this method is used, DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one DMS task can access the database at any given time. Therefore, if you need to run parallel DMS tasks against the same database, use the default method.
Possible values include:"rely-on-sql-server-replication-agent"
"exclusive-automatic-truncation"
"shared-automatic-truncation"
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
UseBcpFullLoad
— (Boolean
)Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
UseThirdPartyBackupDevice
— (Boolean
)When this attribute is set to
Y
, DMS processes third-party transaction log backups if they are created in native format.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SQL Server endpoint connection details.
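For illustration (not an example from the service reference), a SQL Server source endpoint that reads changes only from transaction log backups and keeps the default safeguard policy might be defined as in the following sketch; the connection values are placeholders.
var params = {
  EndpointIdentifier: 'my-sqlserver-source', // placeholder
  EndpointType: 'source',
  EngineName: 'sqlserver',
  MicrosoftSQLServerSettings: {
    ServerName: 'sqlserver.example.com',     // placeholder
    Port: 1433,
    DatabaseName: 'sales',                   // placeholder
    Username: 'dms_user',                    // placeholder
    Password: 'example-password',            // placeholder
    ReadBackupOnly: true,                    // read changes from log backups, not the active log
    SafeguardPolicy: 'rely-on-sql-server-replication-agent',
    UseThirdPartyBackupDevice: false
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});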
IBMDb2Settings
— (map
)The settings for the IBM Db2 LUW source endpoint. For more information, see the
IBMDb2Settings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port. The default value is 50000.
ServerName
— (String
)Fully qualified domain name of the endpoint.
SetDataCaptureChanges
— (Boolean
)Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn
— (String
)For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead
— (Integer
)Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Db2 LUW endpoint connection details.
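A rough sketch (not from the service reference) of a Db2 LUW source endpoint that enables CDC and starts reading from a known log sequence number follows; the server, credentials, and LSN value are placeholders.
var params = {
  EndpointIdentifier: 'my-db2-source',  // placeholder
  EndpointType: 'source',
  EngineName: 'db2',
  IBMDb2Settings: {
    ServerName: 'db2.example.com',      // placeholder
    Port: 50000,                        // the default Db2 port
    DatabaseName: 'SAMPLE',             // placeholder
    Username: 'dms_user',               // placeholder
    Password: 'example-password',       // placeholder
    SetDataCaptureChanges: true,        // enable ongoing replication (CDC)
    CurrentLsn: 'LSN_PLACEHOLDER',      // placeholder; the LSN where replication should start
    MaxKBytesPerRead: 64
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});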
DocDbSettings
— (map
)Provides information that defines a DocumentDB endpoint.
Username
— (String
)The user name you use to access the DocumentDB source endpoint.
Password
— (String
)The password for the user account you use to access the DocumentDB source endpoint.
ServerName
— (String
)The name of the server on the DocumentDB source endpoint.
Port
— (Integer
)The port value for the DocumentDB source endpoint.
DatabaseName
— (String
)The database name on the DocumentDB source endpoint.
NestingLevel
— (String
)Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
Possible values include:
"none"
"one"
ExtractDocId
— (Boolean
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (Integer
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the DocumentDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the DocumentDB endpoint connection details.
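For example (a sketch, not taken from the service reference), a DocumentDB source endpoint that uses table mode might be defined as follows; all connection values are placeholders.
var params = {
  EndpointIdentifier: 'my-docdb-source',     // placeholder
  EndpointType: 'source',
  EngineName: 'docdb',
  DocDbSettings: {
    ServerName: 'docdb.cluster.example.com', // placeholder
    Port: 27017,
    DatabaseName: 'orders',                  // placeholder
    Username: 'dms_user',                    // placeholder
    Password: 'example-password',            // placeholder
    NestingLevel: 'one',                     // "one" selects table mode
    DocsToInvestigate: 1000                  // documents previewed to determine the table structure
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});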
RedisSettings
— (map
)The settings for the Redis target endpoint. For more information, see the
RedisSettings
structure.ServerName
— required — (String
)Fully qualified domain name of the endpoint.
Port
— required — (Integer
)Transmission Control Protocol (TCP) port for the endpoint.
SslSecurityProtocol
— (String
)The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include
plaintext
andssl-encryption
. The default isssl-encryption
. Thessl-encryption
option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using theSslCaCertificateArn
setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA. The plaintext option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database.
Possible values include:
"plaintext"
"ssl-encryption"
AuthType
— (String
)The type of authentication to perform when connecting to a Redis target. Options include none, auth-token, and auth-role. The auth-token option requires an AuthPassword value to be provided. The auth-role option requires AuthUserName and AuthPassword values to be provided.
Possible values include:
"none"
"auth-role"
"auth-token"
AuthUserName
— (String
)The user name provided with the
auth-role
option of theAuthType
setting for a Redis target endpoint.AuthPassword
— (String
)The password provided with the
auth-role
andauth-token
options of theAuthType
setting for a Redis target endpoint.SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
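A sketch (not from the service reference) of a Redis target endpoint that keeps the default TLS connection and uses token authentication follows; the server name, certificate ARN, and token are placeholders.
var params = {
  EndpointIdentifier: 'my-redis-target',     // placeholder
  EndpointType: 'target',
  EngineName: 'redis',
  RedisSettings: {
    ServerName: 'redis.example.com',         // placeholder (required)
    Port: 6379,                              // required
    SslSecurityProtocol: 'ssl-encryption',   // the default; 'plaintext' disables TLS
    SslCaCertificateArn: 'arn:aws:dms:us-east-1:123456789012:cert:EXAMPLE', // placeholder; omit to use the Amazon root CA
    AuthType: 'auth-token',                  // requires AuthPassword
    AuthPassword: 'example-auth-token'       // placeholder
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});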
-
(AWS.Response)
—
Returns:
createEventSubscription(params = {}, callback) ⇒ AWS.Request
Creates an DMS event notification subscription.
You can specify the type of source (
SourceType
) you want to be notified of, provide a list of DMS source IDs (SourceIds
) that triggers the events, and provide a list of event categories (EventCategories
) for events you want to be notified of. If you specify both theSourceType
andSourceIds
, such asSourceType = replication-instance
andSourceIdentifier = my-replinstance
, you will be notified of all the replication instance events for the specified source. If you specify aSourceType
but don't specify aSourceIdentifier
, you receive notice of the events for that source type for all your DMS sources. If you don't specify either SourceType
or SourceIdentifier
, you will be notified of events generated from all DMS sources belonging to your customer account.For more information about DMS events, see Working with Events and Notifications in the Database Migration Service User Guide.
Service Reference:
Examples:
Calling the createEventSubscription operation
var params = {
  SnsTopicArn: 'STRING_VALUE', /* required */
  SubscriptionName: 'STRING_VALUE', /* required */
  Enabled: true || false,
  EventCategories: [
    'STRING_VALUE',
    /* more items */
  ],
  SourceIds: [
    'STRING_VALUE',
    /* more items */
  ],
  SourceType: 'STRING_VALUE',
  Tags: [
    {
      Key: 'STRING_VALUE',
      ResourceArn: 'STRING_VALUE',
      Value: 'STRING_VALUE'
    },
    /* more items */
  ]
};
dms.createEventSubscription(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
SubscriptionName
— (String
)The name of the DMS event notification subscription. This name must be less than 255 characters.
SnsTopicArn
— (String
)The Amazon Resource Name (ARN) of the Amazon SNS topic created for event notification. The ARN is created by Amazon SNS when you create a topic and subscribe to it.
SourceType
— (String
)The type of DMS resource that generates the events. For example, if you want to be notified of events generated by a replication instance, you set this parameter to
replication-instance
. If this value isn't specified, all events are returned.Valid values:
replication-instance
|replication-task
EventCategories
— (Array<String>
)A list of event categories for a source type that you want to subscribe to. For more information, see Working with Events and Notifications in the Database Migration Service User Guide.
SourceIds
— (Array<String>
)A list of identifiers for which DMS provides notification events.
If you don't specify a value, notifications are provided for all sources.
If you specify multiple values, they must be of the same type. For example, if you specify a database instance ID, then all of the other values must be database instance IDs.
Enabled
— (Boolean
)A Boolean value; set to
true
to activate the subscription, or set tofalse
to create the subscription but not activate it.Tags
— (Array<map>
)One or more tags to be assigned to the event subscription.
Key
— (String
)A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").Value
— (String
)A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").ResourceArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the resource for which the tag is created.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:EventSubscription
— (map
)The event subscription that was created.
CustomerAwsId
— (String
)The Amazon Web Services customer account associated with the DMS event notification subscription.
CustSubscriptionId
— (String
)The DMS event notification subscription Id.
SnsTopicArn
— (String
)The topic ARN of the DMS event notification subscription.
Status
— (String
)The status of the DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
SubscriptionCreationTime
— (String
)The time the DMS event notification subscription was created.
SourceType
— (String
)The type of DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
SourceIdsList
— (Array<String>
)A list of source Ids for the event subscription.
EventCategoriesList
— (Array<String>
)A list of event categories.
Enabled
— (Boolean
)Boolean value that indicates if the event subscription is enabled.
-
(AWS.Response)
—
Returns:
createReplicationInstance(params = {}, callback) ⇒ AWS.Request
Creates the replication instance using the specified parameters.
DMS requires that your account have certain roles with appropriate permissions before you can create a replication instance. For information on the required roles, see Creating the IAM Roles to Use With the CLI and DMS API. For information on the required permissions, see IAM Permissions Needed to Use DMS.
Service Reference:
Examples:
Create replication instance
/* Creates the replication instance using the specified parameters. */ var params = { AllocatedStorage: 123, AutoMinorVersionUpgrade: true, AvailabilityZone: "", EngineVersion: "", KmsKeyId: "", MultiAZ: true, PreferredMaintenanceWindow: "", PubliclyAccessible: true, ReplicationInstanceClass: "", ReplicationInstanceIdentifier: "", ReplicationSubnetGroupIdentifier: "", Tags: [ { Key: "string", Value: "string" } ], VpcSecurityGroupIds: [ ] }; dms.createReplicationInstance(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { ReplicationInstance: { AllocatedStorage: 5, AutoMinorVersionUpgrade: true, EngineVersion: "1.5.0", KmsKeyId: "arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd", PendingModifiedValues: { }, PreferredMaintenanceWindow: "sun:06:00-sun:14:00", PubliclyAccessible: true, ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ", ReplicationInstanceClass: "dms.t2.micro", ReplicationInstanceIdentifier: "test-rep-1", ReplicationInstanceStatus: "creating", ReplicationSubnetGroup: { ReplicationSubnetGroupDescription: "default", ReplicationSubnetGroupIdentifier: "default", SubnetGroupStatus: "Complete", Subnets: [ { SubnetAvailabilityZone: { Name: "us-east-1d" }, SubnetIdentifier: "subnet-f6dd91af", SubnetStatus: "Active" }, { SubnetAvailabilityZone: { Name: "us-east-1b" }, SubnetIdentifier: "subnet-3605751d", SubnetStatus: "Active" }, { SubnetAvailabilityZone: { Name: "us-east-1c" }, SubnetIdentifier: "subnet-c2daefb5", SubnetStatus: "Active" }, { SubnetAvailabilityZone: { Name: "us-east-1e" }, SubnetIdentifier: "subnet-85e90cb8", SubnetStatus: "Active" } ], VpcId: "vpc-6741a603" } } } */ });
Calling the createReplicationInstance operation
var params = {
  ReplicationInstanceClass: 'STRING_VALUE', /* required */
  ReplicationInstanceIdentifier: 'STRING_VALUE', /* required */
  AllocatedStorage: 'NUMBER_VALUE',
  AutoMinorVersionUpgrade: true || false,
  AvailabilityZone: 'STRING_VALUE',
  DnsNameServers: 'STRING_VALUE',
  EngineVersion: 'STRING_VALUE',
  KmsKeyId: 'STRING_VALUE',
  MultiAZ: true || false,
  PreferredMaintenanceWindow: 'STRING_VALUE',
  PubliclyAccessible: true || false,
  ReplicationSubnetGroupIdentifier: 'STRING_VALUE',
  ResourceIdentifier: 'STRING_VALUE',
  Tags: [
    {
      Key: 'STRING_VALUE',
      ResourceArn: 'STRING_VALUE',
      Value: 'STRING_VALUE'
    },
    /* more items */
  ],
  VpcSecurityGroupIds: [
    'STRING_VALUE',
    /* more items */
  ]
};
dms.createReplicationInstance(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationInstanceIdentifier
— (String
)The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
-
Must contain 1-63 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Can't end with a hyphen or contain two consecutive hyphens.
Example:
myrepinstance
-
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) to be initially allocated for the replication instance.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. For example, to specify the instance class dms.c4.large, set this parameter to
"dms.c4.large"
.For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
VpcSecurityGroupIds
— (Array<String>
)Specifies the VPC security group to be used with the replication instance. The VPC security group must work with the VPC containing the replication instance.
AvailabilityZone
— (String
)The Availability Zone where the replication instance will be created. The default value is a random, system-chosen Availability Zone in the endpoint's Amazon Web Services Region, for example:
us-east-1d
ReplicationSubnetGroupIdentifier
— (String
)A subnet group to associate with the replication instance.
PreferredMaintenanceWindow
— (String
)The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format:
ddd:hh24:mi-ddd:hh24:mi
Default: A 30-minute window selected at random from an 8-hour block of time per Amazon Web Services Region, occurring on a random day of the week.
Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun
Constraints: Minimum 30-minute window.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
AutoMinorVersionUpgrade
— (Boolean
)A value that indicates whether minor engine upgrades are applied automatically to the replication instance during the maintenance window. This parameter defaults to
true
.Default:
true
Tags
— (Array<map>
)One or more tags to be assigned to the replication instance.
Key
— (String
)A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").Value
— (String
)A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").ResourceArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the resource for which the tag is created.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
PubliclyAccessible
— (Boolean
)Specifies the accessibility options for the replication instance. A value of
true
represents an instance with a public IP address. A value offalse
represents an instance with a private IP address. The default value istrue
.DnsNameServers
— (String
)A list of custom DNS name servers supported for the replication instance to access your on-premise source or target database. This list overrides the default name servers supported by the replication instance. You can specify a comma-separated list of internet addresses for up to four on-premise DNS name servers. For example:
"1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4"
ResourceIdentifier
— (String
)A friendly name for the resource identifier at the end of the
EndpointArn
response parameter that is returned in the createdEndpoint
object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such asExample-App-ARN1
. For example, this value might result in theEndpointArn
valuearn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1
. If you don't specify aResourceIdentifier
value, DMS generates a default identifier value for the end ofEndpointArn
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationInstance
— (map
)The replication instance that was created.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier is a required parameter. This parameter is stored as a lowercase string.
Constraints:
-
Must contain 1-63 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
Example:
myrepinstance
-
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. It is a required parameter, although a default value is pre-selected in the DMS console.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
ReplicationInstanceStatus
— (String
)The status of the replication instance. The possible return values include:
-
"available"
-
"creating"
-
"deleted"
-
"deleting"
-
"failed"
-
"modifying"
-
"upgrading"
-
"rebooting"
-
"resetting-master-credentials"
-
"storage-full"
-
"incompatible-credentials"
-
"incompatible-network"
-
"maintenance"
-
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime
— (Date
)The time the replication instance was created.
VpcSecurityGroups
— (Array<map>
)The VPC security group for the instance.
VpcSecurityGroupId
— (String
)The VPC security group ID.
Status
— (String
)The status of the VPC security group.
AvailabilityZone
— (String
)The Availability Zone for the instance.
ReplicationSubnetGroup
— (map
)The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
PreferredMaintenanceWindow
— (String
)The maintenance window times for the replication instance. Any pending upgrades to the replication instance are performed during this time.
PendingModifiedValues
— (map
)The pending modification values.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
When modifying a major engine version of an instance, also set
AllowMajorVersionUpgrade
totrue
.AutoMinorVersionUpgrade
— (Boolean
)Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress
— (String
)The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress
— (String
)The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses
— (Array<String>
)One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses
— (Array<String>
)One or more private IP addresses for the replication instance.
PubliclyAccessible
— (Boolean
)Specifies the accessibility options for the replication instance. A value of
true
represents an instance with a public IP address. A value offalse
represents an instance with a private IP address. The default value istrue
.SecondaryAvailabilityZone
— (String
)The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil
— (Date
)The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers
— (String
)The DNS name servers supported for the replication instance to access your on-premise source or target database.
-
(AWS.Response)
—
Returns:
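Because the new replication instance is returned with a status of "creating", you typically wait for it to become usable before creating endpoints or tasks against it. One way to do that is the replicationInstanceAvailable waiter state supported by this service; the following sketch (not from the service reference) reuses the placeholder ARN from the example response above.
var arn = 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ'; // placeholder ARN from the example above

dms.waitFor('replicationInstanceAvailable', {
  Filters: [
    { Name: 'replication-instance-arn', Values: [arn] }
  ]
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred, or the waiter timed out
  else console.log(data.ReplicationInstances[0].ReplicationInstanceStatus); // "available"
});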
createReplicationSubnetGroup(params = {}, callback) ⇒ AWS.Request
Creates a replication subnet group given a list of the subnet IDs in a VPC.
The VPC needs to have at least one subnet in at least two availability zones in the Amazon Web Services Region, otherwise the service will throw a
ReplicationSubnetGroupDoesNotCoverEnoughAZs
exception.Service Reference:
Examples:
Create replication subnet group
/* Creates a replication subnet group given a list of the subnet IDs in a VPC. */

var params = {
  ReplicationSubnetGroupDescription: "US West subnet group",
  ReplicationSubnetGroupIdentifier: "us-west-2ab-vpc-215ds366",
  SubnetIds: [
    "subnet-e145356n",
    "subnet-58f79200"
  ],
  Tags: [
    {
      Key: "Acount",
      Value: "145235"
    }
  ]
};
dms.createReplicationSubnetGroup(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
  /*
  data = {
    ReplicationSubnetGroup: {
    }
  }
  */
});
Calling the createReplicationSubnetGroup operation
var params = {
  ReplicationSubnetGroupDescription: 'STRING_VALUE', /* required */
  ReplicationSubnetGroupIdentifier: 'STRING_VALUE', /* required */
  SubnetIds: [ /* required */
    'STRING_VALUE',
    /* more items */
  ],
  Tags: [
    {
      Key: 'STRING_VALUE',
      ResourceArn: 'STRING_VALUE',
      Value: 'STRING_VALUE'
    },
    /* more items */
  ]
};
dms.createReplicationSubnetGroup(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationSubnetGroupIdentifier
— (String
)The name for the replication subnet group. This value is stored as a lowercase string.
Constraints: Must contain no more than 255 alphanumeric characters, periods, spaces, underscores, or hyphens. Must not be "default".
Example:
mySubnetgroup
ReplicationSubnetGroupDescription
— (String
)The description for the subnet group.
SubnetIds
— (Array<String>
)One or more subnet IDs to be assigned to the subnet group.
Tags
— (Array<map>
)One or more tags to be assigned to the subnet group.
Key
— (String
)A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").Value
— (String
)A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").ResourceArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the resource for which the tag is created.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationSubnetGroup
— (map
)The replication subnet group that was created.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
-
(AWS.Response)
—
Returns:
createReplicationTask(params = {}, callback) ⇒ AWS.Request
Creates a replication task using the specified parameters.
Service Reference:
Examples:
Create replication task
/* Creates a replication task using the specified parameters. */ var params = { CdcStartTime: <Date Representation>, MigrationType: "full-load", ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ", ReplicationTaskIdentifier: "task1", ReplicationTaskSettings: "", SourceEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE", TableMappings: "file://mappingfile.json", Tags: [ { Key: "Acount", Value: "24352226" } ], TargetEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E" }; dms.createReplicationTask(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { ReplicationTask: { MigrationType: "full-load", ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ", ReplicationTaskArn: "arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM", ReplicationTaskCreationDate: <Date Representation>, ReplicationTaskIdentifier: "task1", ReplicationTaskSettings: "{\"TargetMetadata\":{\"TargetSchema\":\"\",\"SupportLobs\":true,\"FullLobMode\":true,\"LobChunkSize\":64,\"LimitedSizeLobMode\":false,\"LobMaxSize\":0},\"FullLoadSettings\":{\"FullLoadEnabled\":true,\"ApplyChangesEnabled\":false,\"TargetTablePrepMode\":\"DROP_AND_CREATE\",\"CreatePkAfterFullLoad\":false,\"StopTaskCachedChangesApplied\":false,\"StopTaskCachedChangesNotApplied\":false,\"ResumeEnabled\":false,\"ResumeMinTableSize\":100000,\"ResumeOnlyClusteredPKTables\":true,\"MaxFullLoadSubTasks\":8,\"TransactionConsistencyTimeout\":600,\"CommitRate\":10000},\"Logging\":{\"EnableLogging\":false}}", SourceEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE", Status: "creating", TableMappings: "file://mappingfile.json", TargetEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E" } } */ });
Calling the createReplicationTask operation
var params = {
  MigrationType: full-load | cdc | full-load-and-cdc, /* required */
  ReplicationInstanceArn: 'STRING_VALUE', /* required */
  ReplicationTaskIdentifier: 'STRING_VALUE', /* required */
  SourceEndpointArn: 'STRING_VALUE', /* required */
  TableMappings: 'STRING_VALUE', /* required */
  TargetEndpointArn: 'STRING_VALUE', /* required */
  CdcStartPosition: 'STRING_VALUE',
  CdcStartTime: new Date || 'Wed Dec 31 1969 16:00:00 GMT-0800 (PST)' || 123456789,
  CdcStopPosition: 'STRING_VALUE',
  ReplicationTaskSettings: 'STRING_VALUE',
  ResourceIdentifier: 'STRING_VALUE',
  Tags: [
    {
      Key: 'STRING_VALUE',
      ResourceArn: 'STRING_VALUE',
      Value: 'STRING_VALUE'
    },
    /* more items */
  ],
  TaskData: 'STRING_VALUE'
};
dms.createReplicationTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskIdentifier
— (String
)An identifier for the replication task.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)An Amazon Resource Name (ARN) that uniquely identifies the source endpoint.
TargetEndpointArn
— (String
)An Amazon Resource Name (ARN) that uniquely identifies the target endpoint.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of a replication instance.
MigrationType
— (String
)The migration type. Valid values:
Possible values include:full-load
|cdc
|full-load-and-cdc
"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)The table mappings for the task, in JSON format. For more information, see Using Table Mapping to Specify Task Settings in the Database Migration Service User Guide.
ReplicationTaskSettings
— (String
)Overall settings for the task, in JSON format. For more information, see Specifying Task Settings for Database Migration Service Tasks in the Database Migration Service User Guide.
CdcStartTime
— (Date
)Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time “2018-03-08T12:12:12”
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
Note: When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting theslotName
extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for DMS.CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12”
Commit time example: --cdc-stop-position “commit_time:2018-02-09T12:12:12”
Tags
— (Array<map>
)One or more tags to be assigned to the replication task.
Key
— (String
)A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").Value
— (String
)A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").ResourceArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the resource for which the tag is created.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
ResourceIdentifier
— (String
)A friendly name for the resource identifier at the end of the
EndpointArn
response parameter that is returned in the createdEndpoint
object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such asExample-App-ARN1
. For example, this value might result in theEndpointArn
valuearn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1
. If you don't specify aResourceIdentifier
value, DMS generates a default identifier value for the end ofEndpointArn
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTask
— (map
)The replication task that was created.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include:"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
-
"moving"
– The task is being moved in response to running theMoveReplicationTask
operation. -
"creating"
– The task is being created in response to running theCreateReplicationTask
operation. -
"deleting"
– The task is being deleted in response to running theDeleteReplicationTask
operation. -
"failed"
– The task failed to successfully complete the database migration in response to running theStartReplicationTask
operation. -
"failed-move"
– The task failed to move in response to running theMoveReplicationTask
operation. -
"modifying"
– The task definition is being modified in response to running theModifyReplicationTask
operation. -
"ready"
– The task is in aready
state where it can respond to other task operations, such asStartReplicationTask
orDeleteReplicationTask
. -
"running"
– The task is performing a database migration in response to running theStartReplicationTask
operation. -
"starting"
– The task is preparing to perform a database migration in response to running theStartReplicationTask
operation. -
"stopped"
– The task has stopped in response to running theStopReplicationTask
operation. -
"stopping"
– The task is preparing to stop in response to running theStopReplicationTask
operation. -
"testing"
– The database migration specified for this task is being tested in response to running either theStartReplicationTaskAssessmentRun
or theStartReplicationTaskAssessment
operation.Note:StartReplicationTaskAssessmentRun
is an improved premigration task assessment operation. TheStartReplicationTaskAssessment
operation assesses data type compatibility only between the source and target database of a given migration task. In contrast,StartReplicationTaskAssessmentRun
enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
orCdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error.The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of theReplicationTask
object.
-
(AWS.Response)
—
Returns:
deleteCertificate(params = {}, callback) ⇒ AWS.Request
Deletes the specified certificate.
Service Reference:
Examples:
Delete Certificate
/* Deletes the specified certificate. */ var params = { CertificateArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUSM457DE6XFJCJQ" }; dms.deleteCertificate(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { Certificate: { } } */ });
Calling the deleteCertificate operation
var params = { CertificateArn: 'STRING_VALUE' /* required */ }; dms.deleteCertificate(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
CertificateArn
— (String
)The Amazon Resource Name (ARN) of the deleted certificate.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Certificate
— (map
)The Secure Sockets Layer (SSL) certificate.
CertificateIdentifier
— (String
)A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
CertificateCreationDate
— (Date
)The date that the certificate was created.
CertificatePem
— (String
)The contents of a
.pem
file, which contains an X.509 certificate.CertificateWallet
— (Buffer, Typed Array, Blob, String
)The location of an imported Oracle Wallet certificate for use with SSL.
CertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate.
CertificateOwner
— (String
)The owner of the certificate.
ValidFromDate
— (Date
)The beginning date that the certificate is valid.
ValidToDate
— (Date
)The final date that the certificate is valid.
SigningAlgorithm
— (String
)The signing algorithm for the certificate.
KeyLength
— (Integer
)The key length of the cryptographic algorithm being used.
-
(AWS.Response)
—
Returns:
deleteConnection(params = {}, callback) ⇒ AWS.Request
Deletes the connection between a replication instance and an endpoint.
Service Reference:
Examples:
Delete Connection
/* Deletes the connection between the replication instance and the endpoint. */ var params = { EndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM", ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ" }; dms.deleteConnection(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { Connection: { } } */ });
Calling the deleteConnection operation
var params = { EndpointArn: 'STRING_VALUE', /* required */ ReplicationInstanceArn: 'STRING_VALUE' /* required */ }; dms.deleteConnection(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Connection
— (map
)The connection that is being deleted.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
EndpointArn
— (String
)The ARN string that uniquely identifies the endpoint.
Status
— (String
)The connection status. This parameter can return one of the following values:
-
"successful"
-
"testing"
-
"failed"
-
"deleting"
-
LastFailureMessage
— (String
)The error message when the connection last failed.
EndpointIdentifier
— (String
)The identifier of the endpoint. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier. This parameter is stored as a lowercase string.
-
(AWS.Response)
—
Returns:
deleteEndpoint(params = {}, callback) ⇒ AWS.Request
Deletes the specified endpoint.
Note: All tasks associated with the endpoint must be deleted before you can delete the endpoint.Service Reference:
Examples:
Delete Endpoint
/* Deletes the specified endpoint. All tasks associated with the endpoint must be deleted before you can delete the endpoint. */ var params = { EndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM" }; dms.deleteEndpoint(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { Endpoint: { EndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM", EndpointIdentifier: "test-endpoint-1", EndpointType: "source", EngineName: "mysql", KmsKeyId: "arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd", Port: 3306, ServerName: "mydb.cx1llnox7iyx.us-west-2.rds.amazonaws.com", Status: "active", Username: "username" } } */ });
Calling the deleteEndpoint operation
var params = { EndpointArn: 'STRING_VALUE' /* required */ }; dms.deleteEndpoint(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Endpoint
— (map
)The endpoint that was deleted.
EndpointIdentifier
— (String
)The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType
— (String
)The type of endpoint. Valid values are source and target.
Possible values include:
"source"
"target"
EngineName
— (String
)The database engine name. Valid values, depending on the EndpointType, include
"mysql"
,"oracle"
,"postgres"
,"mariadb"
,"aurora"
,"aurora-postgresql"
,"redshift"
,"s3"
,"db2"
,"azuredb"
,"sybase"
,"dynamodb"
,"mongodb"
,"kinesis"
,"kafka"
,"elasticsearch"
,"documentdb"
,"sqlserver"
, and"neptune"
.EngineDisplayName
— (String
)The expanded name for the engine name. For example, if the
EngineName
parameter is "aurora," this value would be "Amazon Aurora MySQL."Username
— (String
)The user name used to connect to the endpoint.
ServerName
— (String
)The name of the server at the endpoint.
Port
— (Integer
)The port value used to access the endpoint.
DatabaseName
— (String
)The name of the database at the endpoint.
ExtraConnectionAttributes
— (String
)Additional connection attributes used to connect to the endpoint.
Status
— (String
)The status of the endpoint.
KmsKeyId
— (String
)An KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn
— (String
)The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode
— (String
)The SSL mode used to connect to the endpoint. The default value is none.
Possible values include:
"none"
"require"
"verify-ca"
"verify-full"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.ExternalTableDefinition
— (String
)The external table definition.
ExternalId
— (String
)Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings
— (map
)The settings for the DynamoDB target endpoint. For more information, see the
DynamoDBSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.
S3Settings
— (map
)The settings for the S3 target endpoint. For more information, see the
S3Settings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.ExternalTableDefinition
— (String
)Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter
— (String
)The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (
\n
).CsvDelimiter
— (String
)The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder
— (String
)An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path
bucketFolder/schema_name/table_name/
. If this parameter isn't specified, then the path used isschema_name/table_name/
.BucketName
— (String
)The name of the S3 bucket.
CompressionType
— (String
)An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
Possible values include:"none"
"gzip"
EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use
SSE_S3
, you need an Identity and Access Management (IAM) role with permission to allow"arn:aws:s3:::dms-*"
to use the following actions:-
s3:CreateBucket
-
s3:ListBucket
-
s3:DeleteBucket
-
s3:GetBucketLocation
-
s3:GetObject
-
s3:PutObject
-
s3:DeleteObject
-
s3:GetObjectVersion
-
s3:GetBucketPolicy
-
s3:PutBucketPolicy
-
s3:DeleteBucketPolicy
"sse-s3"
"sse-kms"
-
ServerSideEncryptionKmsKeyId
— (String
)If you are using
SSE_KMS
for theEncryptionMode
, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.Here is a CLI example:
aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat
— (String
)The format of the data that you want to use for output. You can choose one of the following:
-
csv
: This is a row-based file format with comma-separated values (.csv). -
parquet
: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
"csv"
"parquet"
-
EncodingType
— (String
)The type of encoding you are using:
-
RLE_DICTIONARY
uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default. -
PLAIN
doesn't use encoding at all. Values are stored as they are. -
PLAIN_DICTIONARY
builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
"plain"
"plain-dictionary"
"rle-dictionary"
-
DictPageSizeLimit
— (Integer
)The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of
PLAIN
. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts toPLAIN
encoding. This size is used for .parquet file format only.RowGroupLength
— (Integer
)The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum,
RowGroupLength
is set to the max row group length in bytes (64 * 1024 * 1024).DataPageSize
— (Integer
)The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion
— (String
)The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.
Possible values include:
"parquet-1-0"
"parquet-2-0"
EnableStatistics
— (Boolean
)A value that enables statistics for Parquet pages and row groups. Choose
true
to enable statistics,false
to disable. Statistics includeNULL
,DISTINCT
,MAX
, andMIN
values. This parameter defaults totrue
. This value is used for .parquet file format only.IncludeOpForFullLoad
— (Boolean
)A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note: DMS supports theIncludeOpForFullLoad
parameter in versions 3.1.4 and later.For full load, records can only be inserted. By default (the
false
setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. IfIncludeOpForFullLoad
is set totrue
ory
, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.Note: This setting works together with theCdcInsertsOnly
and theCdcInsertsAndUpdates
parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..CdcInsertsOnly
— (Boolean
)A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the
false
setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.If
CdcInsertsOnly
is set totrue
ory
, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value ofIncludeOpForFullLoad
. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to I to indicate the INSERT operation at the source. IfIncludeOpForFullLoad
is set tofalse
, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the interaction described preceding between theCdcInsertsOnly
andIncludeOpForFullLoad
parameters in versions 3.1.4 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.TimestampColumnName
— (String
)A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note: DMS supports theTimestampColumnName
parameter in versions 3.1.4 and later.DMS includes an additional
STRING
column in the .csv or .parquet object files of your migrated data when you setTimestampColumnName
to a nonblank value.For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is
yyyy-MM-dd HH:mm:ss.SSSSSS
. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.When the
AddColumnName
parameter is set totrue
, DMS also includes a name for the timestamp column that you set withTimestampColumnName
.ParquetTimestampInMillisecond
— (Boolean
)A value that specifies the precision of any
TIMESTAMP
column values that are written to an Amazon S3 object file in .parquet format.Note: DMS supports theParquetTimestampInMillisecond
parameter in versions 3.1.4 and later.When
ParquetTimestampInMillisecond
is set totrue
ory
, DMS writes allTIMESTAMP
columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.Currently, Amazon Athena and Glue can handle only millisecond precision for
TIMESTAMP
values. Set this parameter totrue
for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.Note: DMS writes anyTIMESTAMP
column values written to an S3 file in .csv format with microsecond precision. SettingParquetTimestampInMillisecond
has no effect on the string format of the timestamp column value that is inserted by setting theTimestampColumnName
parameter.CdcInsertsAndUpdates
— (Boolean
)A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is
false
, but whenCdcInsertsAndUpdates
is set totrue
ory
, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the
IncludeOpForFullLoad
parameter. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to eitherI
orU
to indicate INSERT and UPDATE operations at the source. But ifIncludeOpForFullLoad
is set tofalse
, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the use of theCdcInsertsAndUpdates
parameter in versions 3.3.1 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.DatePartitionEnabled
— (Boolean
)When set to
true
, this parameter partitions S3 bucket folders based on transaction commit dates. The default value isfalse
. For more information about date-based folder partitioning, see Using date-based folder partitioning.DatePartitionSequence
— (String
)Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"YYYYMMDD"
"YYYYMMDDHH"
"YYYYMM"
"MMYYYYDD"
"DDMMYYYY"
DatePartitionDelimiter
— (String
)Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"SLASH"
"UNDERSCORE"
"DASH"
"NONE"
UseCsvNoSupValue
— (Boolean
)This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to
true
for columns not included in the supplemental log, DMS uses the value specified byCsvNoSupValue
. If not set or set tofalse
, DMS uses the null value for these columns.Note: This setting is supported in DMS versions 3.4.1 and later.CsvNoSupValue
— (String
)This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If
UseCsvNoSupValue
is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of theUseCsvNoSupValue
setting.Note: This setting is supported in DMS versions 3.4.1 and later.PreserveTransactions
— (Boolean
)If set to
true
, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified byCdcPath
. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.Note: This setting is supported in DMS versions 3.4.2 and later.CdcPath
— (String
)Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If
CdcPath
is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target if you setPreserveTransactions
totrue
, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified byBucketFolder
andBucketName
.For example, if you specify
CdcPath
asMyChangedData
, and you specifyBucketName
asMyTargetBucket
but do not specifyBucketFolder
, DMS creates the CDC folder path following:MyTargetBucket/MyChangedData
.If you specify the same
CdcPath
, and you specifyBucketName
asMyTargetBucket
andBucketFolder
asMyTargetData
, DMS creates the CDC folder path following:MyTargetBucket/MyTargetData/MyChangedData
.For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
Note: This setting is supported in DMS versions 3.4.2 and later.CannedAclForObjects
— (String
)A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
Possible values include:"none"
"private"
"public-read"
"public-read-write"
"authenticated-read"
"aws-exec-read"
"bucket-owner-read"
"bucket-owner-full-control"
AddColumnName
— (Boolean
)An optional parameter that, when set to
true
ory
, you can use to add column name information to the .csv output file.The default value is
false
. Valid values aretrue
,false
,y
, andn
.CdcMaxBatchInterval
— (Integer
)Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When
CdcMaxBatchInterval
andCdcMinFileSize
are both specified, the file write is triggered by whichever parameter condition is met first within an DMS CloudFormation template.The default value is 60 seconds.
CdcMinFileSize
— (Integer
)Minimum file size, defined in megabytes, to reach for a file output to Amazon S3.
When
CdcMinFileSize
andCdcMaxBatchInterval
are both specified, the file write is triggered by whichever parameter condition is met first within an DMS CloudFormation template.The default value is 32 MB.
CsvNullValue
— (String
)An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of
NULL
.The default value is
NULL
. Valid values include any valid string.IgnoreHeaderRows
— (Integer
)When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
MaxFileSize
— (Integer
)A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
Rfc4180
— (Boolean
)For an S3 source, when this value is set to
true
ory
, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set tofalse
orn
, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to
true
ory
using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.The default value is
true
. Valid values includetrue
,false
,y
, andn
.
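As an illustration only, the following sketch shows how an S3Settings map of this shape might be supplied when creating an S3 target endpoint with createEndpoint; the identifier, role ARN, bucket names, and column name are placeholders.
var params = {
  EndpointIdentifier: 'example-s3-target',   /* placeholder */
  EndpointType: 'target',
  EngineName: 's3',
  S3Settings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-s3-role', /* placeholder */
    BucketName: 'example-bucket',            /* placeholder */
    BucketFolder: 'example-folder',
    DataFormat: 'parquet',
    CompressionType: 'gzip',
    EncryptionMode: 'sse-s3',
    IncludeOpForFullLoad: true,
    TimestampColumnName: 'dms_commit_ts'     /* placeholder column name */
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});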
DmsTransferSettings
— (map
)The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
-
ServiceAccessRoleArn
- - The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow theiam:PassRole
action. -
BucketName
- The name of the S3 bucket to use.
Shorthand syntax for these settings is as follows:
ServiceAccessRoleArn=string,BucketName=string,
JSON syntax for these settings is as follows:
{ "ServiceAccessRoleArn": "string", "BucketName": "string"}
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the
iam:PassRole
action.BucketName
— (String
)The name of the S3 bucket to use.
-
MongoDbSettings
— (map
)The settings for the MongoDB source endpoint. For more information, see the
MongoDbSettings
structure.Username
— (String
)The user name you use to access the MongoDB source endpoint.
Password
— (String
)The password for the user account you use to access the MongoDB source endpoint.
ServerName
— (String
)The name of the server on the MongoDB source endpoint.
Port
— (Integer
)The port value for the MongoDB source endpoint.
DatabaseName
— (String
)The database name on the MongoDB source endpoint.
AuthType
— (String
)The authentication type you use to access the MongoDB source endpoint.
When set to "no", user name and password parameters are not used and can be empty.
Possible values include:
"no"
AuthMechanism
— (String
)The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
Possible values include:
"default"
"mongodb_cr"
"scram_sha_1"
NestingLevel
— (String
)Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
Possible values include:
"none"
"one"
ExtractDocId
— (String
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (String
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.AuthSource
— (String
)The MongoDB database name. This setting isn't used when
AuthType
is set to"no"
.The default is
"admin"
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MongoDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MongoDB endpoint connection details.
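For orientation, here is a hedged sketch of a MongoDB source endpoint that uses several of the MongoDbSettings fields described above; the host, database, and credentials are placeholders (the Secrets Manager settings could be used instead of clear-text credentials).
var params = {
  EndpointIdentifier: 'example-mongodb-source',   /* placeholder */
  EndpointType: 'source',
  EngineName: 'mongodb',
  MongoDbSettings: {
    ServerName: 'mongodb.example.com',   /* placeholder host */
    Port: 27017,
    DatabaseName: 'exampledb',
    AuthType: 'password',
    AuthMechanism: 'default',
    AuthSource: 'admin',
    Username: 'example-user',            /* placeholder */
    Password: 'example-password',        /* placeholder */
    NestingLevel: 'none'
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});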
KinesisSettings
— (map
)The settings for the Amazon Kinesis target endpoint. For more information, see the
KinesisSettings
structure.StreamArn
— (String
)The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Kinesis data stream. The role must allow the
iam:PassRole
action.IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kinesis message output, unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is
false
.IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
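The sketch below illustrates, with placeholder ARNs only, how a KinesisSettings map of this shape might look on a createEndpoint call for a Kinesis Data Streams target.
var params = {
  EndpointIdentifier: 'example-kinesis-target',   /* placeholder */
  EndpointType: 'target',
  EngineName: 'kinesis',
  KinesisSettings: {
    StreamArn: 'arn:aws:kinesis:us-east-1:123456789012:stream/example-stream',       /* placeholder */
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-kinesis-role', /* placeholder */
    MessageFormat: 'json',
    IncludePartitionValue: true,
    PartitionIncludeSchemaTable: true
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});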
KafkaSettings
— (map
)The settings for the Apache Kafka target endpoint. For more information, see the
KafkaSettings
structure.Broker
— (String
)A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form
broker-hostname-or-ip:port
. For example,"ec2-12-345-678-901.compute-1.amazonaws.com:2345"
. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.Topic
— (String
)The topic to which you migrate the data. If you don't specify a topic, DMS specifies
"kafka-default-topic"
as the migration topic.MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kafka message output unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is
false
.MessageMaxBytes
— (Integer
)The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.SecurityProtocol
— (String
)Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
Possible values include:
"plaintext"
"ssl-authentication"
"ssl-encryption"
"sasl-ssl"
SslClientCertificateArn
— (String
)The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
SslClientKeyArn
— (String
)The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
SslClientKeyPassword
— (String
)The password for the client private key used to securely connect to a Kafka target endpoint.
SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
SaslUsername
— (String
)The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
SaslPassword
— (String
)The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
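As a non-authoritative example, a Kafka target endpoint using a few of the KafkaSettings fields above might be created as follows; the broker address, topic, and certificate ARN are placeholders.
var params = {
  EndpointIdentifier: 'example-kafka-target',   /* placeholder */
  EndpointType: 'target',
  EngineName: 'kafka',
  KafkaSettings: {
    Broker: 'ec2-12-345-678-901.compute-1.amazonaws.com:2345',              /* placeholder broker list */
    Topic: 'example-topic',                                                  /* placeholder */
    MessageFormat: 'json-unformatted',
    SecurityProtocol: 'ssl-encryption',
    SslCaCertificateArn: 'arn:aws:dms:us-east-1:123456789012:cert:EXAMPLE'   /* placeholder */
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});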
ElasticsearchSettings
— (map
)The settings for the Elasticsearch target endpoint. For more information, see the
ElasticsearchSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.EndpointUri
— required — (String
)The endpoint for the Elasticsearch cluster. DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage
— (Integer
)The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1,000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration
— (Integer
)The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
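A minimal sketch of an Elasticsearch target endpoint using the ElasticsearchSettings fields above; the domain URI and role ARN are placeholders and the retry values are arbitrary.
var params = {
  EndpointIdentifier: 'example-es-target',   /* placeholder */
  EndpointType: 'target',
  EngineName: 'elasticsearch',
  ElasticsearchSettings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-es-role', /* placeholder */
    EndpointUri: 'https://search-example-domain.us-east-1.es.amazonaws.com',    /* placeholder */
    FullLoadErrorPercentage: 10,
    ErrorRetryDuration: 300
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});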
NeptuneSettings
— (map
)The settings for the Amazon Neptune target endpoint. For more information, see the
NeptuneSettings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. The role must allow the
iam:PassRole
action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the Database Migration Service User Guide.S3BucketName
— required — (String
)The name of the Amazon S3 bucket where DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder
— required — (String
)A folder path where you want DMS to store migrated graph data in the S3 bucket specified by
S3BucketName
ErrorRetryDuration
— (Integer
)The number of milliseconds for DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize
— (Integer
)The maximum size in kilobytes of migrated graph data stored in a .csv file before DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount
— (Integer
)The number of times for DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled
— (Boolean
)If you want Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to
true
. Then attach the appropriate IAM policy document to your service role specified byServiceAccessRoleArn
. The default isfalse
.
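For illustration, a Neptune target endpoint using the NeptuneSettings fields above might be created as shown below; the cluster endpoint, S3 bucket, and role ARN are placeholders.
var params = {
  EndpointIdentifier: 'example-neptune-target',   /* placeholder */
  EndpointType: 'target',
  EngineName: 'neptune',
  ServerName: 'example-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com',  /* placeholder */
  Port: 8182,
  NeptuneSettings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-neptune-role', /* placeholder */
    S3BucketName: 'example-staging-bucket',    /* placeholder */
    S3BucketFolder: 'neptune-staging',
    MaxFileSize: 1048576,
    MaxRetryCount: 5,
    IamAuthEnabled: false
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});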
RedshiftSettings
— (map
)Settings for the Amazon Redshift endpoint.
AcceptAnyDate
— (Boolean
)A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose
true
orfalse
(the default).This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript
— (String
)Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder
— (String
)An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift
COPY
command to upload the .csv files to the target table. The files are deleted once theCOPY
operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName
— (String
)The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames
— (Boolean
)If Amazon Redshift is configured to support case sensitive schema names, set
CaseSensitiveNames
totrue
. The default isfalse
.CompUpdate
— (Boolean
)If you set
CompUpdate
totrue
Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other thanRAW
. If you setCompUpdate
tofalse
, automatic compression is disabled and existing column encodings aren't changed. The default istrue
.ConnectionTimeout
— (Integer
)A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName
— (String
)The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat
— (String
)The date format that you are using. Valid values are
auto
(case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Usingauto
recognizes most strings, even some that aren't supported when you use a date format string.If your date and time values use formats different from each other, set this to
auto
.EmptyAsNull
— (Boolean
)A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of
true
sets empty CHAR and VARCHAR fields to null. The default isfalse
.EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
Possible values include:
"sse-s3"
"sse-kms"
ExplicitIds
— (Boolean
)This setting is only valid for a full-load migration task. Set
ExplicitIds
totrue
to have tables withIDENTITY
columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default isfalse
.FileTransferUploadStreams
— (Integer
)The number of parallel streams (threads) used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams
accepts a value from 1 through 64. It defaults to 10.LoadTimeout
— (Integer
)The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize
— (Integer
)The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576KB (1 GB).
Password
— (String
)The password for the user named in the
username
property.Port
— (Integer
)The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes
— (Boolean
)A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose
true
to remove quotation marks. The default isfalse
.ReplaceInvalidChars
— (String
)A list of characters that you want to replace. Use with
ReplaceChars
.ReplaceChars
— (String
)A value that specifies to replace the invalid characters specified in
ReplaceInvalidChars
, substituting the specified characters instead. The default is"?"
.ServerName
— (String
)The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the
iam:PassRole
action.ServerSideEncryptionKmsKeyId
— (String
)The KMS key ID. If you are using
SSE_KMS
for theEncryptionMode
, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.TimeFormat
— (String
)The time format that you want to use. Valid values are
auto
(case-sensitive),'timeformat_string'
,'epochsecs'
, or'epochmillisecs'
. It defaults to 10. Usingauto
recognizes most strings, even some that aren't supported when you use a time format string.If your date and time values use formats different from each other, set this parameter to
auto
.TrimBlanks
— (Boolean
)A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose
true
to remove unneeded white space. The default isfalse
.TruncateColumns
— (Boolean
)A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose
true
to truncate data. The default isfalse
.Username
— (String
)An Amazon Redshift user name for a registered user.
WriteBufferSize
— (Integer
)The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Amazon Redshift endpoint connection details.
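A hedged sketch of an Amazon Redshift target endpoint that combines top-level connection parameters with a few of the RedshiftSettings fields above; the cluster host, credentials, bucket, and role ARN are placeholders (Secrets Manager settings could be used instead of clear-text credentials).
var params = {
  EndpointIdentifier: 'example-redshift-target',   /* placeholder */
  EndpointType: 'target',
  EngineName: 'redshift',
  ServerName: 'example-cluster.abc123.us-east-1.redshift.amazonaws.com',  /* placeholder */
  Port: 5439,
  DatabaseName: 'exampledb',
  Username: 'example-user',        /* placeholder */
  Password: 'example-password',    /* placeholder */
  RedshiftSettings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/example-dms-redshift-role', /* placeholder */
    BucketName: 'example-intermediate-bucket',   /* placeholder */
    BucketFolder: 'redshift-staging',
    FileTransferUploadStreams: 10,
    MaxFileSize: 1048576
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});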
PostgreSQLSettings
— (map
)The settings for the PostgreSQL source and target endpoint. For more information, see the
PostgreSQLSettings
structure.AfterConnectScript
— (String
)For use with change data capture (CDC) only, this attribute has DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example:
afterConnectScript=SET session_replication_role='replica'
CaptureDdls
— (Boolean
)To capture DDL events, DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to
N
, you don't have to create tables or triggers on the source database.MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example:
maxFileSize=512
DatabaseName
— (String
)Database name for the endpoint.
DdlArtifactsSchema
— (String
)The schema in which the operational DDL database artifacts are created.
Example:
ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout
— (Integer
)Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example:
executeTimeout=100;
FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this value causes a task to fail if the actual size of a LOB column is greater than the specifiedLobMaxSize
If a task is set to Limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
HeartbeatEnable
— (Boolean
)The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps
restart_lsn
moving and prevents storage full scenarios.HeartbeatSchema
— (String
)Sets the schema in which the heartbeat artifacts are created.
HeartbeatFrequency
— (Integer
)Sets the WAL heartbeat frequency (in minutes).
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SlotName
— (String
)Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance.
When used with the
CdcStartPosition
request parameter for the DMS API , this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting ofCdcStartPosition
. If the specified slot doesn't exist or the task doesn't have a validCdcStartPosition
setting, DMS raises an error.For more information about setting the
CdcStartPosition
request parameter, see Determining a CDC native start point in the Database Migration Service User Guide. For more information about usingCdcStartPosition
, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.PluginName
— (String
)Specifies the plugin to use to create a replication slot.
Possible values include:"no-preference"
"test-decoding"
"pglogical"
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the PostgreSQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the PostgreSQL endpoint connection details.
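The PostgreSQLSettings structure shown above is the same structure accepted by the createEndpoint and modifyEndpoint operations. The following is a minimal sketch, not part of the service reference, of creating a PostgreSQL source endpoint that authenticates through Secrets Manager instead of clear-text credentials; all ARNs, names, and identifiers are placeholders.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

// Placeholder ARNs and identifiers; substitute your own values.
var params = {
  EndpointIdentifier: 'pg-source-endpoint',
  EndpointType: 'source',
  EngineName: 'postgres',
  PostgreSQLSettings: {
    DatabaseName: 'inventory',
    // Secrets Manager credentials instead of Username/Password/ServerName/Port:
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-secrets-access-role',
    SecretsManagerSecretId: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:pg-source-secret',
    SlotName: 'dms_slot',        // previously created logical replication slot
    HeartbeatEnable: true,       // keep restart_lsn moving during idle periods
    HeartbeatFrequency: 5        // WAL heartbeat every 5 minutes
  }
};

dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});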
MySQLSettings
— (map
)The settings for the MySQL source and target endpoint. For more information, see the
MySQLSettings
structure.AfterConnectScript
— (String
)Specifies a script to run immediately after DMS connects to the endpoint. The migration task continues running regardless if the SQL statement succeeds or fails.
For this parameter, provide the code of the script itself, not the name of a file containing the script.
CleanSourceMetadataOnMismatch
— (Boolean
)Adjusts the behavior of DMS when migrating from an SQL Server source database that is hosted as part of an Always On availability group cluster. If you need DMS to poll all the nodes in the Always On cluster for transaction backups, set this attribute to
false
.DatabaseName
— (String
)Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the
DatabaseName
request parameter on either theCreateEndpoint
orModifyEndpoint
API call. SpecifyingDatabaseName
when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.EventsPollInterval
— (Integer
)Specifies how often to check the binary log for new changes/events when the database is idle.
Example:
eventsPollInterval=5;
In the example, DMS checks for changes in the binary logs every five seconds.
TargetDbType
— (String
)Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example: targetDbType=MULTIPLE_DATABASES
Possible values include:
"specific-database"
"multiple-databases"
MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example:
maxFileSize=512
ParallelLoadThreads
— (Integer
)Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example:
parallelLoadThreads=1
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
ServerTimezone
— (String
)Specifies the time zone for the source MySQL database.
Example:
serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MySQL endpoint connection details.
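As with the other engine-specific structures, MySQLSettings can also be supplied to createEndpoint or modifyEndpoint. The sketch below is illustrative only; it omits DatabaseName as the note above recommends, and the host and credentials are placeholders.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'mysql-target-endpoint',   // placeholder
  EndpointType: 'target',
  EngineName: 'mysql',
  MySQLSettings: {
    // DatabaseName is intentionally omitted for MySQL endpoints; the database
    // is selected through the task's table-mapping rules instead.
    ServerName: 'mysql.example.com',             // placeholder host
    Port: 3306,
    Username: 'dms_user',                        // placeholder credentials
    Password: 'example-password',
    TargetDbType: 'multiple-databases',
    ParallelLoadThreads: 2,
    MaxFileSize: 512                             // KB per .csv transfer file
  }
};

dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});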
OracleSettings
— (map
)The settings for the Oracle source and target endpoint. For more information, see the
OracleSettings
structure.AddSupplementalLogging
— (Boolean
)Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId
— (Integer
)Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the
AdditionalArchivedLogDestId
option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.AdditionalArchivedLogDestId
— (Integer
)Set this attribute with
ArchivedLogDestId
in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless necessary. For additional information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.ExtraArchivedLogDestIds
— (Array<Integer>
)Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the
dest_id
column in thev$archived_log
view. Use this setting with thearchivedLogDestId
extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup.This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, DMS needs information about what destination to get archive redo logs from to read changes. DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2]
In a primary-to-multiple-standby setup, you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]
Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless it's necessary. For more information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.AllowSelectNestedTables
— (Boolean
)Set this attribute to
true
to enable replication of Oracle tables containing columns that are nested tables or defined types.ParallelAsmReadThreads
— (Integer
)Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the
readAheadBlocks
attribute.ReadAheadBlocks
— (Integer
)Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly
— (Boolean
)Set this attribute to
false
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.UseAlternateFolderForOnline
— (Boolean
)Set this attribute to
true
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.OraclePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix
— (Boolean
)Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified
usePathPrefix
setting to access the redo logs.EnableHomogenousTablespace
— (Boolean
)Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog
— (Boolean
)When set to
true
, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.ArchivedLogsOnly
— (Boolean
)When this field is set to
Y
, DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the DMS user account needs to be granted ASM privileges.AsmPassword
— (String
)For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the
asm_user_password
value. You set this value as part of the comma-separated value that you set to thePassword
request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmServer
— (String
)For an Oracle source endpoint, your ASM server address. You can set this value from the
asm_server
value. You setasm_server
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmUser
— (String
)For an Oracle source endpoint, your ASM user name. You can set this value from the
asm_user
value. You setasm_user
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.CharLengthSemantics
— (String
)Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to
CHAR
. Otherwise, the character column length is in bytes.Example:
Possible values include:charLengthSemantics=CHAR;
"default"
"char"
"byte"
DatabaseName
— (String
)Database name for the endpoint.
DirectPathParallelLoad
— (Boolean
)When set to
true
, this attribute specifies a parallel load whenuseDirectPathFullLoad
is set toY
. This attribute also only applies when you use the DMS parallel load feature. Note that the target table cannot have any constraints or indexes.FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this attribute causes a task to fail if the actual size of an LOB column is greater than the specifiedLobMaxSize
.If a task is set to limited LOB mode and this option is set to
true
, the task fails instead of truncating the LOB data.NumberDatatypeScale
— (Integer
)Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example:
numberDataTypeScale=12
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ReadTableSpaceName
— (Boolean
)When set to
true
, this attribute supports tablespace replication.RetryInterval
— (Integer
)Specifies the number of seconds that the system waits before resending a query.
Example:
retryInterval=6;
SecurityDbEncryption
— (String
)For an Oracle source endpoint, the transparent data encryption (TDE) password required by DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the
TDE_Password
part of the comma-separated value you set to thePassword
request parameter when you create the endpoint. TheSecurityDbEncryption
setting is related to thisSecurityDbEncryptionName
setting. For more information, see Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.SecurityDbEncryptionName
— (String
)For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the
SecurityDbEncryption
setting. For more information on setting the key name value ofSecurityDbEncryptionName
, see the information and example for setting thesecurityDbEncryptionName
extra connection attribute in Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.ServerName
— (String
)Fully qualified domain name of the endpoint.
SpatialDataOptionToGeoJsonFunctionName
— (String
)Use this attribute to convert
SDO_GEOMETRY
toGEOJSON
format. By default, DMS calls theSDO2GEOJSON
custom function if present and accessible. Or you can create your own custom function that mimics the operation ofSDO2GEOJSON
and setSpatialDataOptionToGeoJsonFunctionName
to call it instead.StandbyDelayTime
— (Integer
)Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases.
In DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
Username
— (String
)Endpoint connection user name.
UseBFile
— (Boolean
)Set this attribute to Y to capture change data using the Binary Reader utility. Set
UseLogminerReader
to N when you set this attribute to Y. To use Binary Reader with Amazon RDS for Oracle as the source, you must also set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for CDC.UseDirectPathFullLoad
— (Boolean
)Set this attribute to Y to have DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
UseLogminerReader
— (Boolean
)Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set
UseLogminerReader
to N, also setUseBfile
to Y. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in the DMS User Guide.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Oracle endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Oracle endpoint connection details.SecretsManagerOracleAsmAccessRoleArn
— (String
)Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the
SecretsManagerOracleAsmSecret
. ThisSecretsManagerOracleAsmSecret
has the secret value that allows access to the Oracle ASM of the endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerOracleAsmSecretId
. Or you can specify clear-text values forAsmUserName
,AsmPassword
, andAsmServerName
. You can't specify both. For more information on creating thisSecretsManagerOracleAsmSecret
and theSecretsManagerOracleAsmAccessRoleArn
andSecretsManagerOracleAsmSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerOracleAsmSecretId
— (String
)Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN, partial ARN, or friendly name of the
SecretsManagerOracleAsmSecret
that contains the Oracle ASM connection details for the Oracle endpoint.
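To illustrate how the Binary Reader and ASM attributes above fit together, here is a hypothetical sketch of an Oracle source endpoint definition for createEndpoint. The secret and role ARNs are placeholders, and the exact attribute combination you need depends on your source configuration.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'oracle-source-endpoint',  // placeholder
  EndpointType: 'source',
  EngineName: 'oracle',
  OracleSettings: {
    // Read the redo logs with Binary Reader instead of LogMiner.
    UseLogminerReader: false,
    UseBFile: true,
    // Connection details through Secrets Manager (placeholder ARNs).
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-secrets-access-role',
    SecretsManagerSecretId: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:oracle-source-secret',
    // ASM access, required only when the redo logs are managed by ASM.
    SecretsManagerOracleAsmAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-asm-access-role',
    SecretsManagerOracleAsmSecretId: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:oracle-asm-secret',
    ParallelAsmReadThreads: 4,
    ReadAheadBlocks: 10000
  }
};

dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});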
SybaseSettings
— (map
)The settings for the SAP ASE source and target endpoint. For more information, see the
SybaseSettings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SAP ASE endpoint connection details.
MicrosoftSQLServerSettings
— (map
)The settings for the Microsoft SQL Server source and target endpoint. For more information, see the
MicrosoftSQLServerSettings
structure.Port
— (Integer
)Endpoint TCP port.
BcpPacketSize
— (Integer
)The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName
— (String
)Database name for the endpoint.
ControlTablesFileGroup
— (String
)Specifies a file group for the DMS internal tables. When the replication task starts, all the internal DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
Password
— (String
)Endpoint connection password.
QuerySingleAlwaysOnNode
— (Boolean
)Cleans and recreates table metadata information on the replication instance when a mismatch occurs. An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
ReadBackupOnly
— (Boolean
)When this attribute is set to
Y
, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter toY
enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.SafeguardPolicy
— (String
)Use this attribute to minimize the need to access the backup log and enable DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task: When this method is used, DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one DMS task can access the database at any given time. Therefore, if you need to run parallel DMS tasks against the same database, use the default method.
Possible values include:"rely-on-sql-server-replication-agent"
"exclusive-automatic-truncation"
"shared-automatic-truncation"
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
UseBcpFullLoad
— (Boolean
)Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
UseThirdPartyBackupDevice
— (Boolean
)When this attribute is set to
Y
, DMS processes third-party transaction log backups if they are created in native format.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SQL Server endpoint connection details.
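The SafeguardPolicy and backup-related attributes above can likewise be set when you define a SQL Server source endpoint. A minimal, hypothetical sketch with placeholder host and credentials:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'sqlserver-source-endpoint',  // placeholder
  EndpointType: 'source',
  EngineName: 'sqlserver',
  MicrosoftSQLServerSettings: {
    DatabaseName: 'sales',                          // placeholder
    ServerName: 'mssql.example.com',                // placeholder host
    Port: 1433,
    Username: 'dms_user',                           // placeholder credentials
    Password: 'example-password',
    SafeguardPolicy: 'rely-on-sql-server-replication-agent',
    ReadBackupOnly: true,        // read changes from transaction log backups only
    UseBcpFullLoad: true,
    BcpPacketSize: 16384
  }
};

dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});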
IBMDb2Settings
— (map
)The settings for the IBM Db2 LUW source endpoint. For more information, see the
IBMDb2Settings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port. The default value is 50000.
ServerName
— (String
)Fully qualified domain name of the endpoint.
SetDataCaptureChanges
— (Boolean
)Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn
— (String
)For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead
— (Integer
)Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Db2 LUW endpoint connection details.
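For an IBM Db2 LUW source, the CDC-related attributes above translate to a request such as the following sketch; the LSN and connection values are placeholders.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'db2-source-endpoint',   // placeholder
  EndpointType: 'source',
  EngineName: 'db2',
  IBMDb2Settings: {
    DatabaseName: 'SAMPLE',                    // placeholder
    ServerName: 'db2.example.com',             // placeholder host
    Port: 50000,                               // default Db2 port
    Username: 'dms_user',                      // placeholder credentials
    Password: 'example-password',
    SetDataCaptureChanges: true,               // enable ongoing replication (CDC)
    CurrentLsn: '0100000000000022CC000000000004FB00', // placeholder start LSN
    MaxKBytesPerRead: 64
  }
};

dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});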
DocDbSettings
— (map
)Provides information that defines a DocumentDB endpoint.
Username
— (String
)The user name you use to access the DocumentDB source endpoint.
Password
— (String
)The password for the user account you use to access the DocumentDB source endpoint.
ServerName
— (String
)The name of the server on the DocumentDB source endpoint.
Port
— (Integer
)The port value for the DocumentDB source endpoint.
DatabaseName
— (String
)The database name on the DocumentDB source endpoint.
NestingLevel
— (String
)Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
Possible values include:
"none"
"one"
ExtractDocId
— (Boolean
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (Integer
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the DocumentDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the DocumentDB endpoint connection details.
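A hypothetical sketch of a DocumentDB source endpoint in table mode, tying together the NestingLevel and DocsToInvestigate attributes described above (placeholder connection values):
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'docdb-source-endpoint',   // placeholder
  EndpointType: 'source',
  EngineName: 'docdb',
  DocDbSettings: {
    DatabaseName: 'appdata',                     // placeholder
    ServerName: 'docdb-cluster.example.com',     // placeholder host
    Port: 27017,
    Username: 'dms_user',                        // placeholder credentials
    Password: 'example-password',
    NestingLevel: 'one',          // table mode
    DocsToInvestigate: 1000       // documents previewed to derive the table shape
  }
};

dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});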
RedisSettings
— (map
)The settings for the Redis target endpoint. For more information, see the
RedisSettings
structure.ServerName
— required — (String
)Fully qualified domain name of the endpoint.
Port
— required — (Integer
)Transmission Control Protocol (TCP) port for the endpoint.
SslSecurityProtocol
— (String
)The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include
plaintext
andssl-encryption
. The default isssl-encryption
. Thessl-encryption
option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using theSslCaCertificateArn
setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA. The plaintext option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database.
Possible values include:
"plaintext"
"ssl-encryption"
AuthType
— (String
)The type of authentication to perform when connecting to a Redis target. Options include none, auth-token, and auth-role. The auth-token option requires an AuthPassword value to be provided. The auth-role option requires AuthUserName and AuthPassword values to be provided.
Possible values include:
"none"
"auth-role"
"auth-token"
AuthUserName
— (String
)The user name provided with the
auth-role
option of theAuthType
setting for a Redis target endpoint.AuthPassword
— (String
)The password provided with the
auth-role
andauth-token
options of theAuthType
setting for a Redis target endpoint.SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
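A minimal sketch showing how the RedisSettings attributes above might be combined on a target endpoint; the auth token, CA ARN, and host are placeholders.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointIdentifier: 'redis-target-endpoint',   // placeholder
  EndpointType: 'target',
  EngineName: 'redis',
  RedisSettings: {
    ServerName: 'redis.example.com',             // required; placeholder host
    Port: 6379,                                  // required
    SslSecurityProtocol: 'ssl-encryption',
    SslCaCertificateArn: 'arn:aws:dms:us-east-1:123456789012:cert:EXAMPLE', // placeholder CA ARN
    AuthType: 'auth-token',
    AuthPassword: 'example-auth-token'           // placeholder token
  }
};

dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});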
-
(AWS.Response)
—
Returns:
deleteEventSubscription(params = {}, callback) ⇒ AWS.Request
Deletes a DMS event subscription.
Service Reference:
Examples:
Calling the deleteEventSubscription operation
var params = { SubscriptionName: 'STRING_VALUE' /* required */ }; dms.deleteEventSubscription(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
SubscriptionName
— (String
)The name of the DMS event notification subscription to be deleted.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:EventSubscription
— (map
)The event subscription that was deleted.
CustomerAwsId
— (String
)The Amazon Web Services customer account associated with the DMS event notification subscription.
CustSubscriptionId
— (String
)The DMS event notification subscription Id.
SnsTopicArn
— (String
)The topic ARN of the DMS event notification subscription.
Status
— (String
)The status of the DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
SubscriptionCreationTime
— (String
)The time the DMS event notification subscription was created.
SourceType
— (String
)The type of DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
SourceIdsList
— (Array<String>
)A list of source Ids for the event subscription.
EventCategoriesList
— (Array<String>
)A list of event categories.
Enabled
— (Boolean
)Boolean value that indicates if the event subscription is enabled.
-
(AWS.Response)
—
Returns:
deleteReplicationInstance(params = {}, callback) ⇒ AWS.Request
Deletes the specified replication instance.
Note: You must delete any migration tasks that are associated with the replication instance before you can delete it.Service Reference:
Examples:
Delete Replication Instance
/* Deletes the specified replication instance. You must delete any migration tasks that are associated with the replication instance before you can delete it. */ var params = { ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ" }; dms.deleteReplicationInstance(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { ReplicationInstance: { AllocatedStorage: 5, AutoMinorVersionUpgrade: true, EngineVersion: "1.5.0", KmsKeyId: "arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd", PendingModifiedValues: { }, PreferredMaintenanceWindow: "sun:06:00-sun:14:00", PubliclyAccessible: true, ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ", ReplicationInstanceClass: "dms.t2.micro", ReplicationInstanceIdentifier: "test-rep-1", ReplicationInstanceStatus: "creating", ReplicationSubnetGroup: { ReplicationSubnetGroupDescription: "default", ReplicationSubnetGroupIdentifier: "default", SubnetGroupStatus: "Complete", Subnets: [ { SubnetAvailabilityZone: { Name: "us-east-1d" }, SubnetIdentifier: "subnet-f6dd91af", SubnetStatus: "Active" }, { SubnetAvailabilityZone: { Name: "us-east-1b" }, SubnetIdentifier: "subnet-3605751d", SubnetStatus: "Active" }, { SubnetAvailabilityZone: { Name: "us-east-1c" }, SubnetIdentifier: "subnet-c2daefb5", SubnetStatus: "Active" }, { SubnetAvailabilityZone: { Name: "us-east-1e" }, SubnetIdentifier: "subnet-85e90cb8", SubnetStatus: "Active" } ], VpcId: "vpc-6741a603" } } } */ });
Calling the deleteReplicationInstance operation
var params = { ReplicationInstanceArn: 'STRING_VALUE' /* required */ }; dms.deleteReplicationInstance(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
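Deletion is asynchronous, so after calling deleteReplicationInstance you can poll the replicationInstanceDeleted waiter state listed earlier on this page. The following is a sketch rather than reference material; it assumes the waiter accepts the same Filters as describeReplicationInstances and uses a placeholder ARN.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
var arn = 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ'; // placeholder ARN

dms.deleteReplicationInstance({ReplicationInstanceArn: arn}, function(err, data) {
  if (err) return console.log(err, err.stack);  // an error occurred
  // Poll until the instance is gone (assumes describeReplicationInstances-style Filters).
  var waitParams = {Filters: [{Name: 'replication-instance-arn', Values: [arn]}]};
  dms.waitFor('replicationInstanceDeleted', waitParams, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log('replication instance deleted');
  });
});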
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance to be deleted.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationInstance
— (map
)The replication instance that was deleted.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier is a required parameter. This parameter is stored as a lowercase string.
Constraints:
-
Must contain 1-63 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
Example:
myrepinstance
-
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. It is a required parameter, although a default value is pre-selected in the DMS console.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
ReplicationInstanceStatus
— (String
)The status of the replication instance. The possible return values include:
-
"available"
-
"creating"
-
"deleted"
-
"deleting"
-
"failed"
-
"modifying"
-
"upgrading"
-
"rebooting"
-
"resetting-master-credentials"
-
"storage-full"
-
"incompatible-credentials"
-
"incompatible-network"
-
"maintenance"
-
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime
— (Date
)The time the replication instance was created.
VpcSecurityGroups
— (Array<map>
)The VPC security group for the instance.
VpcSecurityGroupId
— (String
)The VPC security group ID.
Status
— (String
)The status of the VPC security group.
AvailabilityZone
— (String
)The Availability Zone for the instance.
ReplicationSubnetGroup
— (map
)The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
PreferredMaintenanceWindow
— (String
)The maintenance window times for the replication instance. Any pending upgrades to the replication instance are performed during this time.
PendingModifiedValues
— (map
)The pending modification values.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
When modifying a major engine version of an instance, also set
AllowMajorVersionUpgrade
totrue
.AutoMinorVersionUpgrade
— (Boolean
)Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress
— (String
)The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress
— (String
)The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses
— (Array<String>
)One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses
— (Array<String>
)One or more private IP addresses for the replication instance.
PubliclyAccessible
— (Boolean
)Specifies the accessibility options for the replication instance. A value of
true
represents an instance with a public IP address. A value offalse
represents an instance with a private IP address. The default value istrue
.SecondaryAvailabilityZone
— (String
)The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil
— (Date
)The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers
— (String
)The DNS name servers supported for the replication instance to access your on-premise source or target database.
-
(AWS.Response)
—
Returns:
deleteReplicationSubnetGroup(params = {}, callback) ⇒ AWS.Request
Deletes a subnet group.
Service Reference:
Examples:
Delete Replication Subnet Group
/* Deletes a replication subnet group. */ var params = { ReplicationSubnetGroupIdentifier: "us-west-2ab-vpc-215ds366" }; dms.deleteReplicationSubnetGroup(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { } */ });
Calling the deleteReplicationSubnetGroup operation
var params = { ReplicationSubnetGroupIdentifier: 'STRING_VALUE' /* required */ }; dms.deleteReplicationSubnetGroup(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationSubnetGroupIdentifier
— (String
)The subnet group name of the replication instance.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
deleteReplicationTask(params = {}, callback) ⇒ AWS.Request
Deletes the specified replication task.
Service Reference:
Examples:
Delete Replication Task
/* Deletes the specified replication task. */ var params = { ReplicationTaskArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ" }; dms.deleteReplicationTask(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { ReplicationTask: { MigrationType: "full-load", ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ", ReplicationTaskArn: "arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM", ReplicationTaskCreationDate: <Date Representation>, ReplicationTaskIdentifier: "task1", ReplicationTaskSettings: "{\"TargetMetadata\":{\"TargetSchema\":\"\",\"SupportLobs\":true,\"FullLobMode\":true,\"LobChunkSize\":64,\"LimitedSizeLobMode\":false,\"LobMaxSize\":0},\"FullLoadSettings\":{\"FullLoadEnabled\":true,\"ApplyChangesEnabled\":false,\"TargetTablePrepMode\":\"DROP_AND_CREATE\",\"CreatePkAfterFullLoad\":false,\"StopTaskCachedChangesApplied\":false,\"StopTaskCachedChangesNotApplied\":false,\"ResumeEnabled\":false,\"ResumeMinTableSize\":100000,\"ResumeOnlyClusteredPKTables\":true,\"MaxFullLoadSubTasks\":8,\"TransactionConsistencyTimeout\":600,\"CommitRate\":10000},\"Logging\":{\"EnableLogging\":false}}", SourceEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE", Status: "creating", TableMappings: "file://mappingfile.json", TargetEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E" } } */ });
Calling the deleteReplicationTask operation
var params = { ReplicationTaskArn: 'STRING_VALUE' /* required */ }; dms.deleteReplicationTask(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task to be deleted.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTask
— (map
)The deleted replication task.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include:"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
-
"moving"
– The task is being moved in response to running theMoveReplicationTask
operation. -
"creating"
– The task is being created in response to running theCreateReplicationTask
operation. -
"deleting"
– The task is being deleted in response to running theDeleteReplicationTask
operation. -
"failed"
– The task failed to successfully complete the database migration in response to running theStartReplicationTask
operation. -
"failed-move"
– The task failed to move in response to running theMoveReplicationTask
operation. -
"modifying"
– The task definition is being modified in response to running theModifyReplicationTask
operation. -
"ready"
– The task is in aready
state where it can respond to other task operations, such asStartReplicationTask
orDeleteReplicationTask
. -
"running"
– The task is performing a database migration in response to running theStartReplicationTask
operation. -
"starting"
– The task is preparing to perform a database migration in response to running theStartReplicationTask
operation. -
"stopped"
– The task has stopped in response to running theStopReplicationTask
operation. -
"stopping"
– The task is preparing to stop in response to running theStopReplicationTask
operation. -
"testing"
– The database migration specified for this task is being tested in response to running either theStartReplicationTaskAssessmentRun
or theStartReplicationTaskAssessment
operation.Note:StartReplicationTaskAssessmentRun
is an improved premigration task assessment operation. TheStartReplicationTaskAssessment
operation assesses data type compatibility only between the source and target database of a given migration task. In contrast,StartReplicationTaskAssessmentRun
enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
orCdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error.The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of theReplicationTask
object.
-
(AWS.Response)
—
Returns:
deleteReplicationTaskAssessmentRun(params = {}, callback) ⇒ AWS.Request
Deletes the record of a single premigration assessment run.
This operation removes all metadata that DMS maintains about this assessment run. However, the operation leaves untouched all information about this assessment run that is stored in your Amazon S3 bucket.
Service Reference:
Examples:
Calling the deleteReplicationTaskAssessmentRun operation
var params = { ReplicationTaskAssessmentRunArn: 'STRING_VALUE' /* required */ }; dms.deleteReplicationTaskAssessmentRun(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskAssessmentRunArn
— (String
)Amazon Resource Name (ARN) of the premigration assessment run to be deleted.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTaskAssessmentRun
— (map
)The
ReplicationTaskAssessmentRun
object for the deleted assessment run.ReplicationTaskAssessmentRunArn
— (String
)Amazon Resource Name (ARN) of this assessment run.
ReplicationTaskArn
— (String
)ARN of the migration task associated with this premigration assessment run.
Status
— (String
)Assessment run status.
This status can have one of the following values:
-
"cancelling"
– The assessment run was canceled by theCancelReplicationTaskAssessmentRun
operation. -
"deleting"
– The assessment run was deleted by theDeleteReplicationTaskAssessmentRun
operation. -
"failed"
– At least one individual assessment completed with afailed
status. -
"error-provisioning"
– An internal error occurred while resources were provisioned (duringprovisioning
status). -
"error-executing"
– An internal error occurred while individual assessments ran (duringrunning
status). -
"invalid state"
– The assessment run is in an unknown state. -
"passed"
– All individual assessments have completed, and none has afailed
status. -
"provisioning"
– Resources required to run individual assessments are being provisioned. -
"running"
– Individual assessments are being run. -
"starting"
– The assessment run is starting, but resources are not yet being provisioned for individual assessments.
-
ReplicationTaskAssessmentRunCreationDate
— (Date
)Date on which the assessment run was created using the
StartReplicationTaskAssessmentRun
operation.AssessmentProgress
— (map
)Indication of the completion progress for the individual assessments specified to run.
IndividualAssessmentCount
— (Integer
)The number of individual assessments that are specified to run.
IndividualAssessmentCompletedCount
— (Integer
)The number of individual assessments that have completed, successfully or not.
LastFailureMessage
— (String
)Last message generated by an individual assessment failure.
ServiceAccessRoleArn
— (String
)ARN of the service role used to start the assessment run using the
StartReplicationTaskAssessmentRun
operation. The role must allow theiam:PassRole
action.ResultLocationBucket
— (String
)Amazon S3 bucket where DMS stores the results of this assessment run.
ResultLocationFolder
— (String
)Folder in an Amazon S3 bucket where DMS stores the results of this assessment run.
ResultEncryptionMode
— (String
)Encryption mode used to encrypt the assessment run results.
ResultKmsKeyArn
— (String
)ARN of the KMS encryption key used to encrypt the assessment run results.
AssessmentRunName
— (String
)Unique name of the assessment run.
-
(AWS.Response)
—
Returns:
describeAccountAttributes(params = {}, callback) ⇒ AWS.Request
Lists all of the DMS attributes for a customer account. These attributes include DMS quotas for the account and a unique account identifier in a particular DMS region. DMS quotas include a list of resource quotas supported by the account, such as the number of replication instances allowed. The description for each resource quota includes the quota name, current usage toward that quota, and the quota's maximum value. DMS uses the unique account identifier to name each artifact used by DMS in the given region.
This command does not take any parameters.
Service Reference:
Examples:
Describe account attributes
/* Lists all of the AWS DMS attributes for a customer account. The attributes include AWS DMS quotas for the account, such as the number of replication instances allowed. The description for a quota includes the quota name, current usage toward that quota, and the quota's maximum value. This operation does not take any parameters. */ var params = { }; dms.describeAccountAttributes(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { AccountQuotas: [ { AccountQuotaName: "ReplicationInstances", Max: 20, Used: 0 }, { AccountQuotaName: "AllocatedStorage", Max: 20, Used: 0 }, { AccountQuotaName: "Endpoints", Max: 20, Used: 0 } ] } */ });
Calling the describeAccountAttributes operation
var params = { }; dms.describeAccountAttributes(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
- params (Object) (defaults to: {})
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:AccountQuotas
— (Array<map>
)Account quota information.
AccountQuotaName
— (String
)The name of the DMS quota for this Amazon Web Services account.
Used
— (Integer
)The amount currently used toward the quota maximum.
Max
— (Integer
)The maximum allowed value for the quota.
UniqueAccountIdentifier
— (String
)A unique DMS identifier for an account in a particular Amazon Web Services Region. The value of this identifier has the following format:
c99999999999
. DMS uses this identifier to name artifacts. For example, DMS uses this identifier to name the default Amazon S3 bucket for storing task assessment reports in a given Amazon Web Services Region. The format of this S3 bucket name is the following:dms-AccountNumber-UniqueAccountIdentifier.
Here is an example name for this default S3 bucket:dms-111122223333-c44445555666
.Note: DMS supports theUniqueAccountIdentifier
parameter in versions 3.1.4 and later.
-
(AWS.Response)
—
Returns:
describeApplicableIndividualAssessments(params = {}, callback) ⇒ AWS.Request
Provides a list of individual assessments that you can specify for a new premigration assessment run, given one or more parameters.
If you specify an existing migration task, this operation provides the default individual assessments you can specify for that task. Otherwise, the specified parameters model elements of a possible migration task on which to base a premigration assessment run.
To use these migration task modeling parameters, you must specify an existing replication instance, a source database engine, a target database engine, and a migration type. This combination of parameters potentially limits the default individual assessments available for an assessment run created for a corresponding migration task.
If you specify no parameters, this operation provides a list of all possible individual assessments that you can specify for an assessment run. If you specify any one of the task modeling parameters, you must specify all of them or the operation cannot provide a list of individual assessments. The only parameter that you can specify alone is for an existing migration task. The specified task definition then determines the default list of individual assessments that you can specify in an assessment run for the task.
Service Reference:
Examples:
Calling the describeApplicableIndividualAssessments operation
var params = {
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE',
  MigrationType: full-load | cdc | full-load-and-cdc,
  ReplicationInstanceArn: 'STRING_VALUE',
  ReplicationTaskArn: 'STRING_VALUE',
  SourceEngineName: 'STRING_VALUE',
  TargetEngineName: 'STRING_VALUE'
};
dms.describeApplicableIndividualAssessments(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
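As described above, the task-modeling parameters (replication instance, source engine, target engine, and migration type) must be supplied together when no existing task is referenced. A minimal sketch, assuming a hypothetical replication instance ARN and an Oracle-to-PostgreSQL full-load-and-cdc migration:
var params = {
  ReplicationInstanceArn: 'arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEINSTANCE', // hypothetical ARN
  SourceEngineName: 'oracle',
  TargetEngineName: 'postgres',
  MigrationType: 'full-load-and-cdc'
};
dms.describeApplicableIndividualAssessments(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data.IndividualAssessmentNames); // default assessments for this task model
});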
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)Amazon Resource Name (ARN) of a migration task on which you want to base the default list of individual assessments.
ReplicationInstanceArn
— (String
)ARN of a replication instance on which you want to base the default list of individual assessments.
SourceEngineName
— (String
)Name of a database engine that the specified replication instance supports as a source.
TargetEngineName
— (String
)Name of a database engine that the specified replication instance supports as a target.
MigrationType
— (String
)Name of the migration type that each provided individual assessment must support.
Possible values include:"full-load"
"cdc"
"full-load-and-cdc"
MaxRecords
— (Integer
)Maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Marker
— (String
)Optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:IndividualAssessmentNames
— (Array<String>
)List of names for the individual assessments supported by the premigration assessment run that you start based on the specified request parameters. For more information on the available individual assessments, including compatibility with different migration task configurations, see Working with premigration assessment runs in the Database Migration Service User Guide.
Marker
— (String
)Pagination token returned for you to pass to a subsequent request. If you pass this token as the
Marker
value in a subsequent request, the response includes only records beyond the marker, up to the value specified in the request byMaxRecords
.
-
(AWS.Response)
—
Returns:
describeCertificates(params = {}, callback) ⇒ AWS.Request
Provides a description of the certificate.
Service Reference:
Examples:
Describe certificates
/* Provides a description of the certificate. */

var params = {
  Filters: [
    { Name: "string", Values: ["string", "string"] }
  ],
  Marker: "",
  MaxRecords: 123
};
dms.describeCertificates(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
  /*
  data = {
    Certificates: [],
    Marker: ""
  }
  */
});
Calling the describeCertificates operation
var params = {
  Filters: [
    {
      Name: 'STRING_VALUE', /* required */
      Values: [ /* required */
        'STRING_VALUE',
        /* more items */
      ]
    },
    /* more items */
  ],
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeCertificates(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
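Results are paginated through MaxRecords and Marker (described under Parameters below), so retrieving every certificate means passing each response's Marker back into the next request. A minimal sketch of that loop, with no filters applied:
// Page through all certificates by passing each response's Marker back in.
function listAllCertificates(marker, certificates) {
  certificates = certificates || [];
  dms.describeCertificates({ MaxRecords: 20, Marker: marker }, function(err, data) {
    if (err) return console.log(err, err.stack);
    certificates = certificates.concat(data.Certificates);
    if (data.Marker) {
      listAllCertificates(data.Marker, certificates); // more pages remain
    } else {
      console.log('Found ' + certificates.length + ' certificates');
    }
  });
}
listAllCertificates();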
Parameters:
-
params
(Object)
(defaults to: {})
—
Filters
— (Array<map>
)Filters applied to the certificates described in the form of key-value pairs.
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 10
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)The pagination token.
Certificates
— (Array<map>
)The Secure Sockets Layer (SSL) certificates associated with the replication instance.
CertificateIdentifier
— (String
)A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
CertificateCreationDate
— (Date
)The date that the certificate was created.
CertificatePem
— (String
)The contents of a
.pem
file, which contains an X.509 certificate.CertificateWallet
— (Buffer, Typed Array, Blob, String
)The location of an imported Oracle Wallet certificate for use with SSL.
CertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate.
CertificateOwner
— (String
)The owner of the certificate.
ValidFromDate
— (Date
)The beginning date that the certificate is valid.
ValidToDate
— (Date
)The final date that the certificate is valid.
SigningAlgorithm
— (String
)The signing algorithm for the certificate.
KeyLength
— (Integer
)The key length of the cryptographic algorithm being used.
-
(AWS.Response)
—
Returns:
describeConnections(params = {}, callback) ⇒ AWS.Request
Describes the status of the connections that have been made between the replication instance and an endpoint. Connections are created when you test an endpoint.
Service Reference:
Examples:
Describe connections
/* Describes the status of the connections that have been made between the replication instance and an endpoint. Connections are created when you test an endpoint. */

var params = {
  Filters: [
    { Name: "string", Values: ["string", "string"] }
  ],
  Marker: "",
  MaxRecords: 123
};
dms.describeConnections(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
  /*
  data = {
    Connections: [
      {
        EndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE",
        EndpointIdentifier: "testsrc1",
        ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ",
        ReplicationInstanceIdentifier: "test",
        Status: "successful"
      }
    ],
    Marker: ""
  }
  */
});
Calling the describeConnections operation
var params = {
  Filters: [
    {
      Name: 'STRING_VALUE', /* required */
      Values: [ /* required */
        'STRING_VALUE',
        /* more items */
      ]
    },
    /* more items */
  ],
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeConnections(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
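The Filters parameter accepts the valid filter names listed under Parameters below (endpoint-arn and replication-instance-arn). A minimal sketch that checks the connection status for one endpoint, assuming a hypothetical endpoint ARN:
var params = {
  Filters: [
    {
      Name: 'endpoint-arn',
      Values: ['arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINT'] // hypothetical ARN
    }
  ]
};
dms.describeConnections(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else data.Connections.forEach(function(conn) {
    console.log(conn.EndpointIdentifier + ': ' + conn.Status); // e.g. "testsrc1: successful"
  });
});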
Parameters:
-
params
(Object)
(defaults to: {})
—
Filters
— (Array<map>
)The filters applied to the connection.
Valid filter names: endpoint-arn | replication-instance-arn
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.Connections
— (Array<map>
)A description of the connections.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
EndpointArn
— (String
)The ARN string that uniquely identifies the endpoint.
Status
— (String
)The connection status. This parameter can return one of the following values:
-
"successful"
-
"testing"
-
"failed"
-
"deleting"
-
LastFailureMessage
— (String
)The error message when the connection last failed.
EndpointIdentifier
— (String
)The identifier of the endpoint. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier. This parameter is stored as a lowercase string.
-
(AWS.Response)
—
Returns:
Waiter Resource States:
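A minimal polling sketch, assuming that the testConnectionSucceeds waiter state listed for this service maps to this operation; the endpoint ARN is hypothetical:
// Assumption: the testConnectionSucceeds waiter polls describeConnections
// until a tested connection reports success.
var waitParams = {
  Filters: [
    {
      Name: 'endpoint-arn',
      Values: ['arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINT'] // hypothetical ARN
    }
  ]
};
dms.waitFor('testConnectionSucceeds', waitParams, function(err, data) {
  if (err) console.log(err, err.stack); // the waiter gave up or the request failed
  else console.log(data.Connections[0].Status); // "successful"
});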
describeEndpoints(params = {}, callback) ⇒ AWS.Request
Returns information about the endpoints for your account in the current region.
Service Reference:
Examples:
Describe endpoints
/* Returns information about the endpoints for your account in the current region. */

var params = {
  Filters: [
    { Name: "string", Values: ["string", "string"] }
  ],
  Marker: "",
  MaxRecords: 123
};
dms.describeEndpoints(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
  /*
  data = {
    Endpoints: [],
    Marker: ""
  }
  */
});
Calling the describeEndpoints operation
var params = {
  Filters: [
    {
      Name: 'STRING_VALUE', /* required */
      Values: [ /* required */
        'STRING_VALUE',
        /* more items */
      ]
    },
    /* more items */
  ],
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeEndpoints(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
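Because engine-name is one of the valid filter names listed under Parameters below, the results can be narrowed to a single database engine. A minimal sketch that lists only MySQL endpoints:
var params = {
  Filters: [
    { Name: 'engine-name', Values: ['mysql'] }
  ],
  MaxRecords: 20 // minimum allowed page size for this operation
};
dms.describeEndpoints(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else data.Endpoints.forEach(function(endpoint) {
    console.log(endpoint.EndpointIdentifier + ' (' + endpoint.EndpointType + ')');
  });
});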
Parameters:
-
params
(Object)
(defaults to: {})
—
Filters
— (Array<map>
)Filters applied to the endpoints.
Valid filter names: endpoint-arn | endpoint-type | endpoint-id | engine-name
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.Endpoints
— (Array<map>
)Endpoint description.
EndpointIdentifier
— (String
)The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType
— (String
)The type of endpoint. Valid values are source and target.
Possible values include:
"source"
"target"
EngineName
— (String
)The database engine name. Valid values, depending on the EndpointType, include
"mysql"
,"oracle"
,"postgres"
,"mariadb"
,"aurora"
,"aurora-postgresql"
,"redshift"
,"s3"
,"db2"
,"azuredb"
,"sybase"
,"dynamodb"
,"mongodb"
,"kinesis"
,"kafka"
,"elasticsearch"
,"documentdb"
,"sqlserver"
, and"neptune"
.EngineDisplayName
— (String
)The expanded name for the engine name. For example, if the
EngineName
parameter is "aurora," this value would be "Amazon Aurora MySQL."Username
— (String
)The user name used to connect to the endpoint.
ServerName
— (String
)The name of the server at the endpoint.
Port
— (Integer
)The port value used to access the endpoint.
DatabaseName
— (String
)The name of the database at the endpoint.
ExtraConnectionAttributes
— (String
)Additional connection attributes used to connect to the endpoint.
Status
— (String
)The status of the endpoint.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn
— (String
)The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode
— (String
)The SSL mode used to connect to the endpoint. The default value is none.
Possible values include:
"none"
"require"
"verify-ca"
"verify-full"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.ExternalTableDefinition
— (String
)The external table definition.
ExternalId
— (String
)Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings
— (map
)The settings for the DynamoDB target endpoint. For more information, see the
DynamoDBSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.
S3Settings
— (map
)The settings for the S3 target endpoint. For more information, see the
S3Settings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.ExternalTableDefinition
— (String
)Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter
— (String
)The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (
\n
).CsvDelimiter
— (String
)The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder
— (String
)An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path
bucketFolder/schema_name/table_name/
. If this parameter isn't specified, then the path used isschema_name/table_name/
.BucketName
— (String
)The name of the S3 bucket.
CompressionType
— (String
)An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
Possible values include:"none"
"gzip"
EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use
SSE_S3
, you need an Identity and Access Management (IAM) role with permission to allow"arn:aws:s3:::dms-*"
to use the following actions:-
s3:CreateBucket
-
s3:ListBucket
-
s3:DeleteBucket
-
s3:GetBucketLocation
-
s3:GetObject
-
s3:PutObject
-
s3:DeleteObject
-
s3:GetObjectVersion
-
s3:GetBucketPolicy
-
s3:PutBucketPolicy
-
s3:DeleteBucketPolicy
"sse-s3"
"sse-kms"
-
ServerSideEncryptionKmsKeyId
— (String
)If you are using
SSE_KMS
for theEncryptionMode
, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.Here is a CLI example:
aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat
— (String
)The format of the data that you want to use for output. You can choose one of the following:
-
csv
: This is a row-based file format with comma-separated values (.csv). -
parquet
: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
"csv"
"parquet"
-
EncodingType
— (String
)The type of encoding you are using:
-
RLE_DICTIONARY
uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default. -
PLAIN
doesn't use encoding at all. Values are stored as they are. -
PLAIN_DICTIONARY
builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
"plain"
"plain-dictionary"
"rle-dictionary"
-
DictPageSizeLimit
— (Integer
)The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of
PLAIN
. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts toPLAIN
encoding. This size is used for .parquet file format only.RowGroupLength
— (Integer
)The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum,
RowGroupLength
is set to the max row group length in bytes (64 * 1024 * 1024).DataPageSize
— (Integer
)The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion
— (String
)The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.
Possible values include:
"parquet-1-0"
"parquet-2-0"
EnableStatistics
— (Boolean
)A value that enables statistics for Parquet pages and row groups. Choose
true
to enable statistics,false
to disable. Statistics includeNULL
,DISTINCT
,MAX
, andMIN
values. This parameter defaults totrue
. This value is used for .parquet file format only.IncludeOpForFullLoad
— (Boolean
)A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note: DMS supports theIncludeOpForFullLoad
parameter in versions 3.1.4 and later.For full load, records can only be inserted. By default (the
false
setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. IfIncludeOpForFullLoad
is set totrue
ory
, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.Note: This setting works together with theCdcInsertsOnly
and theCdcInsertsAndUpdates
parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..CdcInsertsOnly
— (Boolean
)A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the
false
setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.If
CdcInsertsOnly
is set totrue
ory
, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value ofIncludeOpForFullLoad
. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to I to indicate the INSERT operation at the source. IfIncludeOpForFullLoad
is set tofalse
, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the interaction described preceding between theCdcInsertsOnly
andIncludeOpForFullLoad
parameters in versions 3.1.4 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.TimestampColumnName
— (String
)A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note: DMS supports theTimestampColumnName
parameter in versions 3.1.4 and later.DMS includes an additional
STRING
column in the .csv or .parquet object files of your migrated data when you setTimestampColumnName
to a nonblank value.For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is
yyyy-MM-dd HH:mm:ss.SSSSSS
. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.When the
AddColumnName
parameter is set totrue
, DMS also includes a name for the timestamp column that you set withTimestampColumnName
.ParquetTimestampInMillisecond
— (Boolean
)A value that specifies the precision of any
TIMESTAMP
column values that are written to an Amazon S3 object file in .parquet format.Note: DMS supports theParquetTimestampInMillisecond
parameter in versions 3.1.4 and later.When
ParquetTimestampInMillisecond
is set totrue
ory
, DMS writes allTIMESTAMP
columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.Currently, Amazon Athena and Glue can handle only millisecond precision for
TIMESTAMP
values. Set this parameter totrue
for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.Note: DMS writes anyTIMESTAMP
column values written to an S3 file in .csv format with microsecond precision. SettingParquetTimestampInMillisecond
has no effect on the string format of the timestamp column value that is inserted by setting theTimestampColumnName
parameter.CdcInsertsAndUpdates
— (Boolean
)A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is
false
, but whenCdcInsertsAndUpdates
is set totrue
ory
, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the
IncludeOpForFullLoad
parameter. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to eitherI
orU
to indicate INSERT and UPDATE operations at the source. But ifIncludeOpForFullLoad
is set tofalse
, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the use of theCdcInsertsAndUpdates
parameter in versions 3.3.1 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.DatePartitionEnabled
— (Boolean
)When set to
true
, this parameter partitions S3 bucket folders based on transaction commit dates. The default value isfalse
. For more information about date-based folder partitioning, see Using date-based folder partitioning.DatePartitionSequence
— (String
)Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"YYYYMMDD"
"YYYYMMDDHH"
"YYYYMM"
"MMYYYYDD"
"DDMMYYYY"
DatePartitionDelimiter
— (String
)Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"SLASH"
"UNDERSCORE"
"DASH"
"NONE"
UseCsvNoSupValue
— (Boolean
)This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to
true
for columns not included in the supplemental log, DMS uses the value specified byCsvNoSupValue
. If not set or set tofalse
, DMS uses the null value for these columns.Note: This setting is supported in DMS versions 3.4.1 and later.CsvNoSupValue
— (String
)This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If
UseCsvNoSupValue
is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of theUseCsvNoSupValue
setting.Note: This setting is supported in DMS versions 3.4.1 and later.PreserveTransactions
— (Boolean
)If set to
true
, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified byCdcPath
. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.Note: This setting is supported in DMS versions 3.4.2 and later.CdcPath
— (String
)Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If
CdcPath
is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target if you setPreserveTransactions
totrue
, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified byBucketFolder
andBucketName
.For example, if you specify
CdcPath
asMyChangedData
, and you specifyBucketName
asMyTargetBucket
but do not specifyBucketFolder
, DMS creates the CDC folder path following:MyTargetBucket/MyChangedData
.If you specify the same
CdcPath
, and you specifyBucketName
asMyTargetBucket
andBucketFolder
asMyTargetData
, DMS creates the CDC folder path following:MyTargetBucket/MyTargetData/MyChangedData
.For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
Note: This setting is supported in DMS versions 3.4.2 and later.CannedAclForObjects
— (String
)A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
Possible values include:"none"
"private"
"public-read"
"public-read-write"
"authenticated-read"
"aws-exec-read"
"bucket-owner-read"
"bucket-owner-full-control"
AddColumnName
— (Boolean
)An optional parameter that, when set to
true
ory
, you can use to add column name information to the .csv output file.The default value is
false
. Valid values aretrue
,false
,y
, andn
.CdcMaxBatchInterval
— (Integer
)Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When
CdcMaxBatchInterval
andCdcMinFileSize
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 60 seconds.
CdcMinFileSize
— (Integer
)Minimum file size, defined in megabytes, to reach for a file output to Amazon S3.
When
CdcMinFileSize
andCdcMaxBatchInterval
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 32 MB.
CsvNullValue
— (String
)An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of
NULL
.The default value is
NULL
. Valid values include any valid string.IgnoreHeaderRows
— (Integer
)When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
MaxFileSize
— (Integer
)A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
Rfc4180
— (Boolean
)For an S3 source, when this value is set to
true
ory
, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set tofalse
orn
, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to
true
ory
using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.The default value is
true
. Valid values includetrue
,false
,y
, andn
.
DmsTransferSettings
— (map
)The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
-
ServiceAccessRoleArn
- - The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow theiam:PassRole
action. -
BucketName
- The name of the S3 bucket to use.
Shorthand syntax for these settings is as follows:
ServiceAccessRoleArn=string,BucketName=string,
JSON syntax for these settings is as follows:
{ "ServiceAccessRoleArn": "string", "BucketName": "string"}
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the
iam:PassRole
action.BucketName
— (String
)The name of the S3 bucket to use.
-
MongoDbSettings
— (map
)The settings for the MongoDB source endpoint. For more information, see the
MongoDbSettings
structure.Username
— (String
)The user name you use to access the MongoDB source endpoint.
Password
— (String
)The password for the user account you use to access the MongoDB source endpoint.
ServerName
— (String
)The name of the server on the MongoDB source endpoint.
Port
— (Integer
)The port value for the MongoDB source endpoint.
DatabaseName
— (String
)The database name on the MongoDB source endpoint.
AuthType
— (String
)The authentication type you use to access the MongoDB source endpoint.
When set to "no", user name and password parameters are not used and can be empty.
Possible values include:
"no"
"password"
AuthMechanism
— (String
)The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
Possible values include:
"default"
"mongodb_cr"
"scram_sha_1"
NestingLevel
— (String
)Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
Possible values include:
"none"
"one"
ExtractDocId
— (String
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (String
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.AuthSource
— (String
)The MongoDB database name. This setting isn't used when
AuthType
is set to"no"
.The default is
"admin"
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MongoDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MongoDB endpoint connection details.
KinesisSettings
— (map
)The settings for the Amazon Kinesis target endpoint. For more information, see the
KinesisSettings
structure.StreamArn
— (String
)The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Kinesis data stream. The role must allow the
iam:PassRole
action.IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kinesis message output, unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is
false
.IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
KafkaSettings
— (map
)The settings for the Apache Kafka target endpoint. For more information, see the
KafkaSettings
structure.Broker
— (String
)A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form
broker-hostname-or-ip:port
. For example,"ec2-12-345-678-901.compute-1.amazonaws.com:2345"
. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.Topic
— (String
)The topic to which you migrate the data. If you don't specify a topic, DMS specifies
"kafka-default-topic"
as the migration topic.MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kafka message output unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is
false
.MessageMaxBytes
— (Integer
)The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.SecurityProtocol
— (String
)Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
Possible values include:
"plaintext"
"ssl-authentication"
"ssl-encryption"
"sasl-ssl"
SslClientCertificateArn
— (String
)The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
SslClientKeyArn
— (String
)The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
SslClientKeyPassword
— (String
)The password for the client private key used to securely connect to a Kafka target endpoint.
SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
SaslUsername
— (String
)The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
SaslPassword
— (String
)The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
ElasticsearchSettings
— (map
)The settings for the Elasticsearch source endpoint. For more information, see the
ElasticsearchSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.EndpointUri
— required — (String
)The endpoint for the Elasticsearch cluster. DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage
— (Integer
)The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration
— (Integer
)The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings
— (map
)The settings for the Amazon Neptune target endpoint. For more information, see the
NeptuneSettings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. The role must allow the
iam:PassRole
action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the Database Migration Service User Guide.S3BucketName
— required — (String
)The name of the Amazon S3 bucket where DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder
— required — (String
)A folder path where you want DMS to store migrated graph data in the S3 bucket specified by
S3BucketName
ErrorRetryDuration
— (Integer
)The number of milliseconds for DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize
— (Integer
)The maximum size in kilobytes of migrated graph data stored in a .csv file before DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount
— (Integer
)The number of times for DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled
— (Boolean
)If you want Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to
true
. Then attach the appropriate IAM policy document to your service role specified byServiceAccessRoleArn
. The default isfalse
.
RedshiftSettings
— (map
)Settings for the Amazon Redshift endpoint.
AcceptAnyDate
— (Boolean
)A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose
true
orfalse
(the default).This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript
— (String
)Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder
— (String
)An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift
COPY
command to upload the .csv files to the target table. The files are deleted once theCOPY
operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName
— (String
)The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames
— (Boolean
)If Amazon Redshift is configured to support case sensitive schema names, set
CaseSensitiveNames
totrue
. The default isfalse
.CompUpdate
— (Boolean
)If you set
CompUpdate
totrue
Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other thanRAW
. If you setCompUpdate
tofalse
, automatic compression is disabled and existing column encodings aren't changed. The default istrue
.ConnectionTimeout
— (Integer
)A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName
— (String
)The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat
— (String
)The date format that you are using. Valid values are
auto
(case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Usingauto
recognizes most strings, even some that aren't supported when you use a date format string.If your date and time values use formats different from each other, set this to
auto
.EmptyAsNull
— (Boolean
)A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of
true
sets empty CHAR and VARCHAR fields to null. The default isfalse
.EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
Possible values include:
"sse-s3"
"sse-kms"
ExplicitIds
— (Boolean
)This setting is only valid for a full-load migration task. Set
ExplicitIds
totrue
to have tables withIDENTITY
columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default isfalse
.FileTransferUploadStreams
— (Integer
)The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams
accepts a value from 1 through 64. It defaults to 10.LoadTimeout
— (Integer
)The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize
— (Integer
)The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576KB (1 GB).
Password
— (String
)The password for the user named in the
username
property.Port
— (Integer
)The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes
— (Boolean
)A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose
true
to remove quotation marks. The default isfalse
.ReplaceInvalidChars
— (String
)A list of characters that you want to replace. Use with
ReplaceChars
.ReplaceChars
— (String
)A value that specifies to replace the invalid characters specified in
ReplaceInvalidChars
, substituting the specified characters instead. The default is"?"
.ServerName
— (String
)The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the
iam:PassRole
action.ServerSideEncryptionKmsKeyId
— (String
)The KMS key ID. If you are using
SSE_KMS
for theEncryptionMode
, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.TimeFormat
— (String
)The time format that you want to use. Valid values are
auto
(case-sensitive),'timeformat_string'
,'epochsecs'
, or'epochmillisecs'
. It defaults to 10. Usingauto
recognizes most strings, even some that aren't supported when you use a time format string.If your date and time values use formats different from each other, set this parameter to
auto
.TrimBlanks
— (Boolean
)A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose
true
to remove unneeded white space. The default isfalse
.TruncateColumns
— (Boolean
)A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose
true
to truncate data. The default isfalse
.Username
— (String
)An Amazon Redshift user name for a registered user.
WriteBufferSize
— (Integer
)The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Amazon Redshift endpoint connection details.
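These Redshift settings are also accepted when you create or modify a Redshift endpoint. The following sketch is illustrative only: the endpoint identifier, role ARNs, and secret ID are placeholder values, and it assumes you authenticate through Secrets Manager rather than clear-text credentials.
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
var params = {
  EndpointIdentifier: 'my-redshift-target',   /* placeholder */
  EndpointType: 'target',
  EngineName: 'redshift',
  RedshiftSettings: {
    DatabaseName: 'dev',                      /* placeholder */
    /* Secrets Manager is used instead of clear-text Username/Password/ServerName/Port */
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/my-dms-secrets-role', /* placeholder */
    SecretsManagerSecretId: 'my-redshift-secret',                                      /* placeholder */
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/my-dms-redshift-s3-role',    /* placeholder */
    MaxFileSize: 1048576,          /* KB; the default (1 GB) */
    FileTransferUploadStreams: 10  /* parallel S3 multipart upload streams */
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});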
PostgreSQLSettings
— (map
)The settings for the PostgreSQL source and target endpoint. For more information, see the
PostgreSQLSettings
structure.AfterConnectScript
— (String
)For use with change data capture (CDC) only, this attribute causes DMS to bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example:
afterConnectScript=SET session_replication_role='replica'
CaptureDdls
— (Boolean
)To capture DDL events, DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to
N
, you don't have to create tables or triggers on the source database.MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example:
maxFileSize=512
DatabaseName
— (String
)Database name for the endpoint.
DdlArtifactsSchema
— (String
)The schema in which the operational DDL database artifacts are created.
Example:
ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout
— (Integer
)Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example:
executeTimeout=100;
FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this value causes a task to fail if the actual size of a LOB column is greater than the specifiedLobMaxSize
.If a task is set to Limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
HeartbeatEnable
— (Boolean
)The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps
restart_lsn
moving and prevents storage full scenarios.HeartbeatSchema
— (String
)Sets the schema in which the heartbeat artifacts are created.
HeartbeatFrequency
— (Integer
)Sets the WAL heartbeat frequency (in minutes).
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SlotName
— (String
)Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance.
When used with the
CdcStartPosition
request parameter for the DMS API , this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting ofCdcStartPosition
. If the specified slot doesn't exist or the task doesn't have a validCdcStartPosition
setting, DMS raises an error.For more information about setting the
CdcStartPosition
request parameter, see Determining a CDC native start point in the Database Migration Service User Guide. For more information about usingCdcStartPosition
, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.PluginName
— (String
)Specifies the plugin to use to create a replication slot.
Possible values include:"no-preference"
"test-decoding"
"pglogical"
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the PostgreSQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the PostgreSQL endpoint connection details.
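For illustration, the CDC-related PostgreSQL settings above might be combined as follows when modifying an existing source endpoint. This is only a sketch: the endpoint ARN, slot name, and schema names are placeholders, and the replication slot is assumed to have been created beforehand.
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE', /* placeholder */
  PostgreSQLSettings: {
    SlotName: 'dms_slot',             /* previously created logical replication slot (placeholder) */
    PluginName: 'pglogical',
    HeartbeatEnable: true,            /* keep restart_lsn moving on an idle source */
    HeartbeatSchema: 'dms_heartbeat', /* placeholder schema for heartbeat artifacts */
    HeartbeatFrequency: 5,            /* minutes */
    CaptureDdls: true,
    DdlArtifactsSchema: 'xyzddlschema',
    FailTasksOnLobTruncation: true
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});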
MySQLSettings
— (map
)The settings for the MySQL source and target endpoint. For more information, see the
MySQLSettings
structure.AfterConnectScript
— (String
)Specifies a script to run immediately after DMS connects to the endpoint. The migration task continues running regardless if the SQL statement succeeds or fails.
For this parameter, provide the code of the script itself, not the name of a file containing the script.
CleanSourceMetadataOnMismatch
— (Boolean
)Cleans and recreates table metadata information on the replication instance when a mismatch occurs. An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance. The default is
false
.DatabaseName
— (String
)Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the
DatabaseName
request parameter on either theCreateEndpoint
orModifyEndpoint
API call. SpecifyingDatabaseName
when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.EventsPollInterval
— (Integer
)Specifies how often to check the binary log for new changes/events when the database is idle.
Example:
eventsPollInterval=5;
In the example, DMS checks for changes in the binary logs every five seconds.
TargetDbType
— (String
)Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example:
targetDbType=MULTIPLE_DATABASES
Possible values include:"specific-database"
"multiple-databases"
MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example:
maxFileSize=512
ParallelLoadThreads
— (Integer
)Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example:
parallelLoadThreads=1
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
ServerTimezone
— (String
)Specifies the time zone for the source MySQL database.
Example:
serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MySQL endpoint connection details.
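As a sketch of the MySQL target-side settings above (all identifiers are placeholders), a MySQL-compatible target that keeps each source database separate and loads with several parallel threads might look like this; DatabaseName is deliberately omitted, as described above.
var params = {
  EndpointIdentifier: 'my-aurora-mysql-target', /* placeholder */
  EndpointType: 'target',
  EngineName: 'aurora',
  MySQLSettings: {
    TargetDbType: 'multiple-databases', /* migrate each source database to its own target database */
    ParallelLoadThreads: 4,             /* each thread opens its own connection to the target */
    MaxFileSize: 512,                   /* KB per .csv file transferred to the target */
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/my-dms-secrets-role', /* placeholder */
    SecretsManagerSecretId: 'my-mysql-secret'                                          /* placeholder */
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});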
OracleSettings
— (map
)The settings for the Oracle source and target endpoint. For more information, see the
OracleSettings
structure.AddSupplementalLogging
— (Boolean
)Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId
— (Integer
)Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the
AdditionalArchivedLogDestId
option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.AdditionalArchivedLogDestId
— (Integer
)Set this attribute with
ArchivedLogDestId
in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless necessary. For additional information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.ExtraArchivedLogDestIds
— (Array<Integer>
)Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the
dest_id
column in thev$archived_log
view. Use this setting with thearchivedLogDestId
extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup.This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, DMS needs information about what destination to get archive redo logs from to read changes. DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2]
In a primary-to-multiple-standby setup, you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]
Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless it's necessary. For more information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.AllowSelectNestedTables
— (Boolean
)Set this attribute to
true
to enable replication of Oracle tables containing columns that are nested tables or defined types.ParallelAsmReadThreads
— (Integer
)Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the
readAheadBlocks
attribute.ReadAheadBlocks
— (Integer
)Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly
— (Boolean
)Set this attribute to
false
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.UseAlternateFolderForOnline
— (Boolean
)Set this attribute to
true
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.OraclePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix
— (Boolean
)Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified
usePathPrefix
setting to access the redo logs.EnableHomogenousTablespace
— (Boolean
)Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog
— (Boolean
)When set to
true
, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.ArchivedLogsOnly
— (Boolean
)When this field is set to
Y
, DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the DMS user account needs to be granted ASM privileges.AsmPassword
— (String
)For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the
asm_user_password
value. You set this value as part of the comma-separated value that you set to thePassword
request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmServer
— (String
)For an Oracle source endpoint, your ASM server address. You can set this value from the
asm_server
value. You setasm_server
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmUser
— (String
)For an Oracle source endpoint, your ASM user name. You can set this value from the
asm_user
value. You setasm_user
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.CharLengthSemantics
— (String
)Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to
CHAR
. Otherwise, the character column length is in bytes.Example:
charLengthSemantics=CHAR;
Possible values include:"default"
"char"
"byte"
DatabaseName
— (String
)Database name for the endpoint.
DirectPathParallelLoad
— (Boolean
)When set to
true
, this attribute specifies a parallel load whenuseDirectPathFullLoad
is set toY
. This attribute also only applies when you use the DMS parallel load feature. Note that the target table cannot have any constraints or indexes.FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this attribute causes a task to fail if the actual size of an LOB column is greater than the specifiedLobMaxSize
.If a task is set to limited LOB mode and this option is set to
true
, the task fails instead of truncating the LOB data.NumberDatatypeScale
— (Integer
)Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example:
numberDataTypeScale=12
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ReadTableSpaceName
— (Boolean
)When set to
true
, this attribute supports tablespace replication.RetryInterval
— (Integer
)Specifies the number of seconds that the system waits before resending a query.
Example:
retryInterval=6;
SecurityDbEncryption
— (String
)For an Oracle source endpoint, the transparent data encryption (TDE) password required by DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the
TDE_Password
part of the comma-separated value you set to thePassword
request parameter when you create the endpoint. TheSecurityDbEncryption
setting is related to thisSecurityDbEncryptionName
setting. For more information, see Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.SecurityDbEncryptionName
— (String
)For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the
SecurityDbEncryption
setting. For more information on setting the key name value ofSecurityDbEncryptionName
, see the information and example for setting thesecurityDbEncryptionName
extra connection attribute in Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.ServerName
— (String
)Fully qualified domain name of the endpoint.
SpatialDataOptionToGeoJsonFunctionName
— (String
)Use this attribute to convert
SDO_GEOMETRY
toGEOJSON
format. By default, DMS calls theSDO2GEOJSON
custom function if present and accessible. Or you can create your own custom function that mimics the operation ofSDOGEOJSON
and setSpatialDataOptionToGeoJsonFunctionName
to call it instead.StandbyDelayTime
— (Integer
)Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases.
In DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
Username
— (String
)Endpoint connection user name.
UseBFile
— (Boolean
)Set this attribute to Y to capture change data using the Binary Reader utility. To set this attribute to Y, you must also set
UseLogminerReader
to N. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for CDC.UseDirectPathFullLoad
— (Boolean
)Set this attribute to Y to have DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
UseLogminerReader
— (Boolean
)Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set
UseLogminerReader
to N, also setUseBfile
to Y. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in the DMS User Guide.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Oracle endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Oracle endpoint connection details.SecretsManagerOracleAsmAccessRoleArn
— (String
)Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the
SecretsManagerOracleAsmSecret
. ThisSecretsManagerOracleAsmSecret
has the secret value that allows access to the Oracle ASM of the endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerOracleAsmSecretId
. Or you can specify clear-text values forAsmUserName
,AsmPassword
, andAsmServerName
. You can't specify both. For more information on creating thisSecretsManagerOracleAsmSecret
and theSecretsManagerOracleAsmAccessRoleArn
andSecretsManagerOracleAsmSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerOracleAsmSecretId
— (String
)Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN, partial ARN, or friendly name of the
SecretsManagerOracleAsmSecret
that contains the Oracle ASM connection details for the Oracle endpoint.
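Several of the Oracle attributes above work together when change data is captured with Binary Reader rather than LogMiner on an Amazon RDS for Oracle source. The sketch below combines them; the identifiers and path prefixes are placeholders, the Boolean fields correspond to the Y/N values in the descriptions, and the exact values depend on your source.
var params = {
  EndpointIdentifier: 'my-rds-oracle-source', /* placeholder */
  EndpointType: 'source',
  EngineName: 'oracle',
  OracleSettings: {
    UseLogminerReader: false,           /* N: read the redo logs as a binary file */
    UseBFile: true,                     /* Y: capture change data with Binary Reader */
    AccessAlternateDirectly: false,
    UseAlternateFolderForOnline: true,
    OraclePathPrefix: '/rdsdbdata/db/ORCL_A/', /* placeholder default Oracle root */
    UsePathPrefix: '/rdsdbdata/log/',          /* placeholder replacement prefix */
    ReplacePathPrefix: true,
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/my-dms-secrets-role', /* placeholder */
    SecretsManagerSecretId: 'my-oracle-secret'                                         /* placeholder */
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});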
SybaseSettings
— (map
)The settings for the SAP ASE source and target endpoint. For more information, see the
SybaseSettings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SAP ASE endpoint connection details.
MicrosoftSQLServerSettings
— (map
)The settings for the Microsoft SQL Server source and target endpoint. For more information, see the
MicrosoftSQLServerSettings
structure.Port
— (Integer
)Endpoint TCP port.
BcpPacketSize
— (Integer
)The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName
— (String
)Database name for the endpoint.
ControlTablesFileGroup
— (String
)Specifies a file group for the DMS internal tables. When the replication task starts, all the internal DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
Password
— (String
)Endpoint connection password.
QuerySingleAlwaysOnNode
— (Boolean
)Adjusts the behavior of DMS when migrating from an SQL Server source database that is hosted as part of an Always On availability group cluster. If you need DMS to poll all the nodes in the Always On cluster for transaction backups, set this attribute to false.
ReadBackupOnly
— (Boolean
)When this attribute is set to
Y
, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter toY
enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.SafeguardPolicy
— (String
)Use this attribute to minimize the need to access the backup log and enable DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task: When this method is used, DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one DMS task can access the database at any given time. Therefore, if you need to run parallel DMS tasks against the same database, use the default method.
Possible values include:"rely-on-sql-server-replication-agent"
"exclusive-automatic-truncation"
"shared-automatic-truncation"
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
UseBcpFullLoad
— (Boolean
)Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
UseThirdPartyBackupDevice
— (Boolean
)When this attribute is set to
Y
, DMS processes third-party transaction log backups if they are created in native format.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SQL Server endpoint connection details.
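To illustrate the SQL Server CDC settings above, the following sketch (placeholder identifiers, Secrets Manager authentication assumed) reads changes only from log backups and uses the exclusive sp_repldone method, which, as noted, is appropriate only when Microsoft Replication isn't running and a single task accesses the database.
var params = {
  EndpointIdentifier: 'my-sqlserver-source', /* placeholder */
  EndpointType: 'source',
  EngineName: 'sqlserver',
  MicrosoftSQLServerSettings: {
    ReadBackupOnly: true,                               /* read changes from transaction log backups only */
    SafeguardPolicy: 'exclusive-automatic-truncation',  /* exclusively use sp_repldone within a single task */
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/my-dms-secrets-role', /* placeholder */
    SecretsManagerSecretId: 'my-sqlserver-secret'                                      /* placeholder */
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});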
IBMDb2Settings
— (map
)The settings for the IBM Db2 LUW source endpoint. For more information, see the
IBMDb2Settings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port. The default value is 50000.
ServerName
— (String
)Fully qualified domain name of the endpoint.
SetDataCaptureChanges
— (Boolean
)Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn
— (String
)For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead
— (Integer
)Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Db2 LUW endpoint connection details.
DocDbSettings
— (map
)Provides information that defines a DocumentDB endpoint.
Username
— (String
)The user name you use to access the DocumentDB source endpoint.
Password
— (String
)The password for the user account you use to access the DocumentDB source endpoint.
ServerName
— (String
)The name of the server on the DocumentDB source endpoint.
Port
— (Integer
)The port value for the DocumentDB source endpoint.
DatabaseName
— (String
)The database name on the DocumentDB source endpoint.
NestingLevel
— (String
)Specifies either document or table mode.
Default value is"none"
. Specify"none"
to use document mode. Specify"one"
to use table mode.Possible values include:"none"
"one"
ExtractDocId
— (Boolean
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (Integer
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the DocumentDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the DocumentDB endpoint connection details.
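As a sketch of the DocumentDB settings above (identifiers are placeholders), table mode with a larger sample of documents might be configured like this:
var params = {
  EndpointIdentifier: 'my-docdb-source', /* placeholder */
  EndpointType: 'source',
  EngineName: 'documentdb',
  DocDbSettings: {
    NestingLevel: 'one',        /* table mode */
    DocsToInvestigate: 5000,    /* documents previewed to determine the document organization */
    DatabaseName: 'mydb',       /* placeholder */
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/my-dms-secrets-role', /* placeholder */
    SecretsManagerSecretId: 'my-docdb-secret'                                          /* placeholder */
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});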
RedisSettings
— (map
)The settings for the Redis target endpoint. For more information, see the
RedisSettings
structure.ServerName
— required — (String
)Fully qualified domain name of the endpoint.
Port
— required — (Integer
)Transmission Control Protocol (TCP) port for the endpoint.
SslSecurityProtocol
— (String
)The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include
plaintext
andssl-encryption
. The default isssl-encryption
. Thessl-encryption
option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using theSslCaCertificateArn
setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA.The
plaintext
option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database.Possible values include:"plaintext"
"ssl-encryption"
AuthType
— (String
)The type of authentication to perform when connecting to a Redis target. Options include
none
,auth-token
, andauth-role
. Theauth-token
option requires anAuthPassword
value to be provided. Theauth-role
option requiresAuthUserName
andAuthPassword
values to be provided.Possible values include:"none"
"auth-role"
"auth-token"
AuthUserName
— (String
)The user name provided with the
auth-role
option of theAuthType
setting for a Redis target endpoint.AuthPassword
— (String
)The password provided with the
auth-role
andauth-token
options of theAuthType
setting for a Redis target endpoint.SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
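Putting the Redis settings above together, a token-authenticated, TLS-encrypted target endpoint could be sketched as follows. The engine name, server, and token are placeholder assumptions; only ServerName and Port are marked required above.
var params = {
  EndpointIdentifier: 'my-redis-target',   /* placeholder */
  EndpointType: 'target',
  EngineName: 'redis',                     /* assumed engine name for a Redis target */
  RedisSettings: {
    ServerName: 'my-redis.example.com',    /* required (placeholder) */
    Port: 6379,                            /* required */
    SslSecurityProtocol: 'ssl-encryption', /* the default; 'plaintext' disables TLS */
    AuthType: 'auth-token',                /* requires AuthPassword */
    AuthPassword: 'EXAMPLE_TOKEN'          /* placeholder */
  }
};
dms.createEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});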
-
(AWS.Response)
—
Returns:
Waiter Resource States:
describeEndpointSettings(params = {}, callback) ⇒ AWS.Request
Returns information about the possible endpoint settings available when you create an endpoint for a specific database engine.
Service Reference:
Examples:
Calling the describeEndpointSettings operation
var params = { EngineName: 'STRING_VALUE', /* required */ Marker: 'STRING_VALUE', MaxRecords: 'NUMBER_VALUE' }; dms.describeEndpointSettings(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
EngineName
— (String
)The database engine used for your source or target endpoint.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.EndpointSettings
— (Array<map>
)Descriptions of the endpoint settings available for your source or target database engine.
Name
— (String
)The name that you want to give the endpoint settings.
Type
— (String
)The type of the endpoint setting.
Possible values include:"string"
"boolean"
"integer"
"enum"
EnumValues
— (Array<String>
)Enumerated values to use for this endpoint.
Sensitive
— (Boolean
)A value that marks this endpoint setting as sensitive.
Units
— (String
)The unit of measure for this endpoint setting.
Applicability
— (String
)The relevance or validity of an endpoint setting for an engine name and its endpoint type.
IntValueMin
— (Integer
)The minimum value of an endpoint setting that is of type
int
.IntValueMax
— (Integer
)The maximum value of an endpoint setting that is of type
int
.DefaultValue
— (String
)The default value of the endpoint setting if no value is specified using
CreateEndpoint
orModifyEndpoint
.
-
(AWS.Response)
—
Returns:
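Because MaxRecords caps each response and Marker points at the next page, retrieving the complete list usually means looping until no marker is returned. A minimal sketch (the engine name is just an example):
function listAllEndpointSettings(engineName, done) {
  var settings = [];
  function page(marker) {
    dms.describeEndpointSettings({ EngineName: engineName, Marker: marker }, function(err, data) {
      if (err) return done(err);
      settings = settings.concat(data.EndpointSettings);
      if (data.Marker) return page(data.Marker); // more pages remain
      done(null, settings);
    });
  }
  page();
}

listAllEndpointSettings('postgres', function(err, settings) {
  if (err) console.log(err, err.stack);              // an error occurred
  else console.log(settings.length + ' settings');   // successful response
});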
describeEndpointTypes(params = {}, callback) ⇒ AWS.Request
Returns information about the type of endpoints available.
Service Reference:
Examples:
Describe endpoint types
/* Returns information about the type of endpoints available. */ var params = { Filters: [ { Name: "string", Values: [ "string", "string" ] } ], Marker: "", MaxRecords: 123 }; dms.describeEndpointTypes(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { Marker: "", SupportedEndpointTypes: [ ] } */ });
Calling the describeEndpointTypes operation
var params = { Filters: [ { Name: 'STRING_VALUE', /* required */ Values: [ /* required */ 'STRING_VALUE', /* more items */ ] }, /* more items */ ], Marker: 'STRING_VALUE', MaxRecords: 'NUMBER_VALUE' }; dms.describeEndpointTypes(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
Filters
— (Array<map>
)Filters applied to the endpoint types.
Valid filter names: engine-name | endpoint-type
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.SupportedEndpointTypes
— (Array<map>
)The types of endpoints that are supported.
EngineName
— (String
)The database engine name. Valid values, depending on the EndpointType, include
"mysql"
,"oracle"
,"postgres"
,"mariadb"
,"aurora"
,"aurora-postgresql"
,"redshift"
,"s3"
,"db2"
,"azuredb"
,"sybase"
,"dynamodb"
,"mongodb"
,"kinesis"
,"kafka"
,"elasticsearch"
,"documentdb"
,"sqlserver"
, and"neptune"
.SupportsCDC
— (Boolean
)Indicates if change data capture (CDC) is supported.
EndpointType
— (String
)The type of endpoint. Valid values are source and target.
Possible values include:"source"
"target"
ReplicationInstanceEngineMinimumVersion
— (String
)The earliest DMS engine version that supports this endpoint engine. Note that endpoint engines released with DMS versions earlier than 3.1.1 do not return a value for this parameter.
EngineDisplayName
— (String
)The expanded name for the engine name. For example, if the
EngineName
parameter is "aurora," this value would be "Amazon Aurora MySQL."
-
(AWS.Response)
—
Returns:
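For example, to check whether a particular engine is supported as a CDC-capable source, you can narrow the results with the engine-name and endpoint-type filters listed above. A sketch:
var params = {
  Filters: [
    { Name: 'engine-name', Values: ['postgres'] },
    { Name: 'endpoint-type', Values: ['source'] }
  ]
};
dms.describeEndpointTypes(params, function(err, data) {
  if (err) return console.log(err, err.stack); // an error occurred
  data.SupportedEndpointTypes.forEach(function(t) {
    console.log(t.EngineDisplayName + ' (' + t.EndpointType + '): CDC supported = ' + t.SupportsCDC);
  });
});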
describeEventCategories(params = {}, callback) ⇒ AWS.Request
Lists categories for all event source types, or, if specified, for a specified source type. You can see a list of the event categories and source types in Working with Events and Notifications in the Database Migration Service User Guide.
Service Reference:
Examples:
Calling the describeEventCategories operation
var params = { Filters: [ { Name: 'STRING_VALUE', /* required */ Values: [ /* required */ 'STRING_VALUE', /* more items */ ] }, /* more items */ ], SourceType: 'STRING_VALUE' }; dms.describeEventCategories(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
SourceType
— (String
)The type of DMS resource that generates events.
Valid values: replication-instance | replication-task
Filters
— (Array<map>
)Filters applied to the event categories.
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:EventCategoryGroupList
— (Array<map>
)A list of event categories.
SourceType
— (String
)The type of DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
EventCategories
— (Array<String>
)A list of event categories from a source type that you've chosen.
-
(AWS.Response)
—
Returns:
describeEvents(params = {}, callback) ⇒ AWS.Request
Lists events for a given source identifier and source type. You can also specify a start and end time. For more information on DMS events, see Working with Events and Notifications in the Database Migration Service User Guide.
Service Reference:
Examples:
Calling the describeEvents operation
var params = { Duration: 'NUMBER_VALUE', EndTime: new Date || 'Wed Dec 31 1969 16:00:00 GMT-0800 (PST)' || 123456789, EventCategories: [ 'STRING_VALUE', /* more items */ ], Filters: [ { Name: 'STRING_VALUE', /* required */ Values: [ /* required */ 'STRING_VALUE', /* more items */ ] }, /* more items */ ], Marker: 'STRING_VALUE', MaxRecords: 'NUMBER_VALUE', SourceIdentifier: 'STRING_VALUE', SourceType: replication-instance, StartTime: new Date || 'Wed Dec 31 1969 16:00:00 GMT-0800 (PST)' || 123456789 }; dms.describeEvents(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
SourceIdentifier
— (String
)The identifier of an event source.
SourceType
— (String
)The type of DMS resource that generates events.
Valid values: replication-instance | replication-task
Possible values include:"replication-instance"
StartTime
— (Date
)The start time for the events to be listed.
EndTime
— (Date
)The end time for the events to be listed.
Duration
— (Integer
)The duration of the events to be listed.
EventCategories
— (Array<String>
)A list of event categories for the source type that you've chosen.
Filters
— (Array<map>
)Filters applied to events.
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.Events
— (Array<map>
)The events described.
SourceIdentifier
— (String
)The identifier of an event source.
SourceType
— (String
)The type of DMS resource that generates events.
Valid values: replication-instance | endpoint | replication-task
Possible values include:"replication-instance"
Message
— (String
)The event message.
EventCategories
— (Array<String>
)The event categories available for the specified source type.
Date
— (Date
)The date of the event.
-
(AWS.Response)
—
Returns:
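As a concrete sketch, the call below lists events for one replication instance over the last 24 hours; the source identifier is a placeholder.
var end = new Date();
var start = new Date(end.getTime() - 24 * 60 * 60 * 1000); // 24 hours ago
var params = {
  SourceType: 'replication-instance',
  SourceIdentifier: 'my-replication-instance', /* placeholder */
  StartTime: start,
  EndTime: end
};
dms.describeEvents(params, function(err, data) {
  if (err) return console.log(err, err.stack); // an error occurred
  data.Events.forEach(function(evt) {
    console.log(evt.Date, evt.SourceIdentifier, evt.Message);
  });
});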
describeEventSubscriptions(params = {}, callback) ⇒ AWS.Request
Lists all the event subscriptions for a customer account. The description of a subscription includes
SubscriptionName
,SNSTopicARN
,CustomerID
,SourceType
,SourceID
,CreationTime
, andStatus
.If you specify
SubscriptionName
, this action lists the description for that subscription.Service Reference:
Examples:
Calling the describeEventSubscriptions operation
var params = { Filters: [ { Name: 'STRING_VALUE', /* required */ Values: [ /* required */ 'STRING_VALUE', /* more items */ ] }, /* more items */ ], Marker: 'STRING_VALUE', MaxRecords: 'NUMBER_VALUE', SubscriptionName: 'STRING_VALUE' }; dms.describeEventSubscriptions(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
SubscriptionName
— (String
)The name of the DMS event subscription to be described.
Filters
— (Array<map>
)Filters applied to event subscriptions.
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.EventSubscriptionsList
— (Array<map>
)A list of event subscriptions.
CustomerAwsId
— (String
)The Amazon Web Services customer account associated with the DMS event notification subscription.
CustSubscriptionId
— (String
)The DMS event notification subscription Id.
SnsTopicArn
— (String
)The topic ARN of the DMS event notification subscription.
Status
— (String
)The status of the DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
SubscriptionCreationTime
— (String
)The time the DMS event notification subscription was created.
SourceType
— (String
)The type of DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
SourceIdsList
— (Array<String>
)A list of source Ids for the event subscription.
EventCategoriesList
— (Array<String>
)A list of event categories.
Enabled
— (Boolean
)Boolean value that indicates if the event subscription is enabled.
-
(AWS.Response)
—
Returns:
describeOrderableReplicationInstances(params = {}, callback) ⇒ AWS.Request
Returns information about the replication instance types that can be created in the specified region.
Service Reference:
Examples:
Describe orderable replication instances
/* Returns information about the replication instance types that can be created in the specified region. */ var params = { Marker: "", MaxRecords: 123 }; dms.describeOrderableReplicationInstances(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response /* data = { Marker: "", OrderableReplicationInstances: [ ] } */ });
Calling the describeOrderableReplicationInstances operation
var params = { Marker: 'STRING_VALUE', MaxRecords: 'NUMBER_VALUE' }; dms.describeOrderableReplicationInstances(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:OrderableReplicationInstances
— (Array<map>
)The order-able replication instances available.
EngineVersion
— (String
)The version of the replication engine.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. For example, to specify the instance class dms.c4.large, set this parameter to
"dms.c4.large"
.For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
StorageType
— (String
)The type of storage used by the replication instance.
MinAllocatedStorage
— (Integer
)The minimum amount of storage (in gigabytes) that can be allocated for the replication instance.
MaxAllocatedStorage
— (Integer
)The maximum amount of storage (in gigabytes) that can be allocated for the replication instance.
DefaultAllocatedStorage
— (Integer
)The default amount of storage (in gigabytes) that is allocated for the replication instance.
IncludedAllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
AvailabilityZones
— (Array<String>
)List of Availability Zones for this replication instance.
ReleaseStatus
— (String
)The value returned when the specified
EngineVersion
of the replication instance is in Beta or test mode. This indicates some features might not work as expected.Note: DMS supports theReleaseStatus
parameter in versions 3.1.4 and later.Possible values include:"beta"
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
-
(AWS.Response)
—
Returns:
describePendingMaintenanceActions(params = {}, callback) ⇒ AWS.Request
For internal use only
Service Reference:
Examples:
Calling the describePendingMaintenanceActions operation
var params = { Filters: [ { Name: 'STRING_VALUE', /* required */ Values: [ /* required */ 'STRING_VALUE', /* more items */ ] }, /* more items */ ], Marker: 'STRING_VALUE', MaxRecords: 'NUMBER_VALUE', ReplicationInstanceArn: 'STRING_VALUE' }; dms.describePendingMaintenanceActions(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
Filters
— (Array<map>
)Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:PendingMaintenanceActions
— (Array<map>
)The pending maintenance action.
ResourceIdentifier
— (String
)The Amazon Resource Name (ARN) of the DMS resource that the pending maintenance action applies to. For information about creating an ARN, see Constructing an Amazon Resource Name (ARN) for DMS in the DMS documentation.
PendingMaintenanceActionDetails
— (Array<map>
)Detailed information about the pending maintenance action.
Action
— (String
)The type of pending maintenance action that is available for the resource.
AutoAppliedAfterDate
— (Date
)The date of the maintenance window when the action is to be applied. The maintenance action is applied to the resource during its first maintenance window after this date. If this date is specified, any
next-maintenance
opt-in requests are ignored.ForcedApplyDate
— (Date
)The date when the maintenance action will be automatically applied. The maintenance action is applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any
immediate
opt-in requests are ignored.OptInStatus
— (String
)The type of opt-in request that has been received for the resource.
CurrentApplyDate
— (Date
)The effective date when the pending maintenance action will be applied to the resource. This date takes into account opt-in requests received from the
ApplyPendingMaintenanceAction
API operation, and also theAutoAppliedAfterDate
andForcedApplyDate
parameter values. This value is blank if an opt-in request has not been received and nothing has been specified forAutoAppliedAfterDate
orForcedApplyDate
.Description
— (String
)A description providing more detail about the maintenance action.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
-
(AWS.Response)
—
Returns:
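A minimal usage sketch (not part of the generated reference), assuming a dms client constructed as in the earlier examples; the replication instance ARN is a placeholder, and the fields printed are the documented ResourceIdentifier, Action, and CurrentApplyDate members:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

// Placeholder ARN; substitute the ARN of your own replication instance.
var params = { ReplicationInstanceArn: 'arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE' };
dms.describePendingMaintenanceActions(params, function(err, data) {
  if (err) return console.log(err, err.stack);
  (data.PendingMaintenanceActions || []).forEach(function(resource) {
    (resource.PendingMaintenanceActionDetails || []).forEach(function(detail) {
      console.log(resource.ResourceIdentifier, detail.Action, detail.CurrentApplyDate);
    });
  });
});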
describeRefreshSchemasStatus(params = {}, callback) ⇒ AWS.Request
Returns the status of the RefreshSchemas operation.
Service Reference:
Examples:
Describe refresh schema status
/* Returns the status of the refresh-schemas operation. */

var params = {
  EndpointArn: ""
};
dms.describeRefreshSchemasStatus(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /* data = { RefreshSchemasStatus: {} } */
});
Calling the describeRefreshSchemasStatus operation
var params = {
  EndpointArn: 'STRING_VALUE' /* required */
};
dms.describeRefreshSchemasStatus(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:RefreshSchemasStatus
— (map
)The status of the schema.
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
Status
— (String
)The status of the schema.
Possible values include:"successful"
"failed"
"refreshing"
LastRefreshDate
— (Date
)The date the schema was last refreshed.
LastFailureMessage
— (String
)The last failure message for the schema.
-
(AWS.Response)
—
Returns:
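A minimal polling sketch, assuming a dms client as constructed above and a placeholder endpoint ARN; it re-checks the refresh status every 30 seconds until the status is no longer "refreshing":
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
var endpointArn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'; // placeholder

function checkRefresh() {
  dms.describeRefreshSchemasStatus({ EndpointArn: endpointArn }, function(err, data) {
    if (err) return console.log(err, err.stack);
    var status = data.RefreshSchemasStatus && data.RefreshSchemasStatus.Status;
    if (status === 'refreshing') {
      setTimeout(checkRefresh, 30000); // poll again in 30 seconds
    } else {
      console.log('Refresh finished with status:', status,
                  data.RefreshSchemasStatus && data.RefreshSchemasStatus.LastFailureMessage);
    }
  });
}
checkRefresh();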
describeReplicationInstances(params = {}, callback) ⇒ AWS.Request
Returns information about replication instances for your account in the current region.
Service Reference:
Examples:
Describe replication instances
/* Returns information about replication instances for your account in the current region. */

var params = {
  Filters: [
    {
      Name: "string",
      Values: ["string", "string"]
    }
  ],
  Marker: "",
  MaxRecords: 123
};
dms.describeReplicationInstances(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /* data = { Marker: "", ReplicationInstances: [] } */
});
Calling the describeReplicationInstances operation
var params = {
  Filters: [
    {
      Name: 'STRING_VALUE', /* required */
      Values: [ /* required */ 'STRING_VALUE', /* more items */ ]
    },
    /* more items */
  ],
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeReplicationInstances(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
Filters
— (Array<map>
)Filters applied to replication instances.
Valid filter names: replication-instance-arn | replication-instance-id | replication-instance-class | engine-version
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.ReplicationInstances
— (Array<map>
)The replication instances described.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier is a required parameter. This parameter is stored as a lowercase string.
Constraints:
-
Must contain 1-63 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
Example:
myrepinstance
-
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. It is a required parameter, although a default value is pre-selected in the DMS console.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
ReplicationInstanceStatus
— (String
)The status of the replication instance. The possible return values include:
-
"available"
-
"creating"
-
"deleted"
-
"deleting"
-
"failed"
-
"modifying"
-
"upgrading"
-
"rebooting"
-
"resetting-master-credentials"
-
"storage-full"
-
"incompatible-credentials"
-
"incompatible-network"
-
"maintenance"
-
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime
— (Date
)The time the replication instance was created.
VpcSecurityGroups
— (Array<map>
)The VPC security group for the instance.
VpcSecurityGroupId
— (String
)The VPC security group ID.
Status
— (String
)The status of the VPC security group.
AvailabilityZone
— (String
)The Availability Zone for the instance.
ReplicationSubnetGroup
— (map
)The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
PreferredMaintenanceWindow
— (String
)The maintenance window times for the replication instance. Any pending upgrades to the replication instance are performed during this time.
PendingModifiedValues
— (map
)The pending modification values.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
When modifying a major engine version of an instance, also set
AllowMajorVersionUpgrade
totrue
.AutoMinorVersionUpgrade
— (Boolean
)Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress
— (String
)The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress
— (String
)The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses
— (Array<String>
)One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses
— (Array<String>
)One or more private IP addresses for the replication instance.
PubliclyAccessible
— (Boolean
)Specifies the accessibility options for the replication instance. A value of
true
represents an instance with a public IP address. A value offalse
represents an instance with a private IP address. The default value istrue
.SecondaryAvailabilityZone
— (String
)The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil
— (Date
)The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers
— (String
)The DNS name servers supported for the replication instance to access your on-premises source or target database.
-
(AWS.Response)
—
Returns:
Waiter Resource States:
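As a sketch of the waiter support tied to this operation (the ARN below is a placeholder, and the client construction mirrors the earlier examples), waitFor('replicationInstanceAvailable') polls describeReplicationInstances until the instance reaches the "available" status instead of you polling manually:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

// Placeholder ARN; the waiter repeatedly calls describeReplicationInstances
// with this filter until the instance becomes available or the waiter gives up.
var params = {
  Filters: [
    { Name: 'replication-instance-arn',
      Values: ['arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE'] }
  ]
};
dms.waitFor('replicationInstanceAvailable', params, function(err, data) {
  if (err) console.log(err, err.stack); // waiter gave up or a request failed
  else console.log(data.ReplicationInstances[0].ReplicationInstanceStatus);
});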
describeReplicationInstanceTaskLogs(params = {}, callback) ⇒ AWS.Request
Returns information about the task logs for the specified task.
Service Reference:
Examples:
Calling the describeReplicationInstanceTaskLogs operation
var params = {
  ReplicationInstanceArn: 'STRING_VALUE', /* required */
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeReplicationInstanceTaskLogs(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstanceTaskLogs
— (Array<map>
)An array of replication task log metadata. Each member of the array contains the replication task name, ARN, and task log size (in bytes).
ReplicationTaskName
— (String
)The name of the replication task.
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationInstanceTaskLogSize
— (Integer
)The size, in bytes, of the replication task log.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
-
(AWS.Response)
—
Returns:
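Because the task log metadata is paginated with MaxRecords and Marker like the other Describe* operations, a sketch such as the following (placeholder ARN, client constructed as above) accumulates all pages before reporting log sizes:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
var arn = 'arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE'; // placeholder

function fetchLogs(marker, logs) {
  var params = { ReplicationInstanceArn: arn, MaxRecords: 20 };
  if (marker) params.Marker = marker;
  dms.describeReplicationInstanceTaskLogs(params, function(err, data) {
    if (err) return console.log(err, err.stack);
    logs = logs.concat(data.ReplicationInstanceTaskLogs || []);
    if (data.Marker) return fetchLogs(data.Marker, logs); // fetch the next page
    logs.forEach(function(l) {
      console.log(l.ReplicationTaskName, l.ReplicationInstanceTaskLogSize, 'bytes');
    });
  });
}
fetchLogs(null, []);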
describeReplicationSubnetGroups(params = {}, callback) ⇒ AWS.Request
Returns information about the replication subnet groups.
Service Reference:
Examples:
Describe replication subnet groups
/* Returns information about the replication subnet groups. */

var params = {
  Filters: [
    {
      Name: "string",
      Values: ["string", "string"]
    }
  ],
  Marker: "",
  MaxRecords: 123
};
dms.describeReplicationSubnetGroups(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /* data = { Marker: "", ReplicationSubnetGroups: [] } */
});
Calling the describeReplicationSubnetGroups operation
var params = {
  Filters: [
    {
      Name: 'STRING_VALUE', /* required */
      Values: [ /* required */ 'STRING_VALUE', /* more items */ ]
    },
    /* more items */
  ],
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeReplicationSubnetGroups(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
Filters
— (Array<map>
)Filters applied to replication subnet groups.
Valid filter names: replication-subnet-group-id
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.ReplicationSubnetGroups
— (Array<map>
)A description of the replication subnet groups.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
-
(AWS.Response)
—
Returns:
describeReplicationTaskAssessmentResults(params = {}, callback) ⇒ AWS.Request
Returns the task assessment results from the Amazon S3 bucket that DMS creates in your Amazon Web Services account. This action always returns the latest results.
For more information about DMS task assessments, see Creating a task assessment report in the Database Migration Service User Guide.
Service Reference:
Examples:
Calling the describeReplicationTaskAssessmentResults operation
var params = {
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE',
  ReplicationTaskArn: 'STRING_VALUE'
};
dms.describeReplicationTaskAssessmentResults(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the task. When this input parameter is specified, the API returns only one result and ignores the values of the
MaxRecords
andMarker
parameters.MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.BucketName
— (String
)The Amazon S3 bucket where the task assessment report is located.
ReplicationTaskAssessmentResults
— (Array<map>
)The task assessment report.
ReplicationTaskIdentifier
— (String
)The replication task identifier of the task on which the task assessment was run.
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskLastAssessmentDate
— (Date
)The date the task assessment was completed.
AssessmentStatus
— (String
)The status of the task assessment.
AssessmentResultsFile
— (String
)The file containing the results of the task assessment.
AssessmentResults
— (String
)The task assessment results in JSON format.
The response object only contains this field if you provide DescribeReplicationTaskAssessmentResultsMessage$ReplicationTaskArn in the request.
S3ObjectUrl
— (String
)The URL of the S3 object containing the task assessment results.
The response object only contains this field if you provide DescribeReplicationTaskAssessmentResultsMessage$ReplicationTaskArn in the request.
-
(AWS.Response)
—
Returns:
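Since AssessmentResults and S3ObjectUrl are only returned when you supply ReplicationTaskArn, a sketch like the following (placeholder ARN, promise-style call, client constructed as in the earlier examples) requests the latest results for a single task:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

// Placeholder ARN for the task whose latest assessment results you want.
var params = { ReplicationTaskArn: 'arn:aws:dms:us-east-1:123456789012:task:EXAMPLE' };
dms.describeReplicationTaskAssessmentResults(params).promise()
  .then(function(data) {
    (data.ReplicationTaskAssessmentResults || []).forEach(function(result) {
      console.log(result.AssessmentStatus, result.S3ObjectUrl);
    });
  })
  .catch(function(err) { console.log(err, err.stack); });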
describeReplicationTaskAssessmentRuns(params = {}, callback) ⇒ AWS.Request
Returns a paginated list of premigration assessment runs based on filter settings.
These filter settings can specify a combination of premigration assessment runs, migration tasks, replication instances, and assessment run status values.
Note: This operation doesn't return information about individual assessments. For this information, see theDescribeReplicationTaskIndividualAssessments
operation.Service Reference:
Examples:
Calling the describeReplicationTaskAssessmentRuns operation
var params = {
  Filters: [
    {
      Name: 'STRING_VALUE', /* required */
      Values: [ /* required */ 'STRING_VALUE', /* more items */ ]
    },
    /* more items */
  ],
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeReplicationTaskAssessmentRuns(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
Filters
— (Array<map>
)Filters applied to the premigration assessment runs described in the form of key-value pairs.
Valid filter names:
replication-task-assessment-run-arn
,replication-task-arn
,replication-instance-arn
,status
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)A pagination token returned for you to pass to a subsequent request. If you pass this token as the
Marker
value in a subsequent request, the response includes only records beyond the marker, up to the value specified in the request byMaxRecords
.ReplicationTaskAssessmentRuns
— (Array<map>
)One or more premigration assessment runs as specified by
Filters
.ReplicationTaskAssessmentRunArn
— (String
)Amazon Resource Name (ARN) of this assessment run.
ReplicationTaskArn
— (String
)ARN of the migration task associated with this premigration assessment run.
Status
— (String
)Assessment run status.
This status can have one of the following values:
-
"cancelling"
– The assessment run was canceled by theCancelReplicationTaskAssessmentRun
operation. -
"deleting"
– The assessment run was deleted by theDeleteReplicationTaskAssessmentRun
operation. -
"failed"
– At least one individual assessment completed with afailed
status. -
"error-provisioning"
– An internal error occurred while resources were provisioned (duringprovisioning
status). -
"error-executing"
– An internal error occurred while individual assessments ran (duringrunning
status). -
"invalid state"
– The assessment run is in an unknown state. -
"passed"
– All individual assessments have completed, and none has afailed
status. -
"provisioning"
– Resources required to run individual assessments are being provisioned. -
"running"
– Individual assessments are being run. -
"starting"
– The assessment run is starting, but resources are not yet being provisioned for individual assessments.
-
ReplicationTaskAssessmentRunCreationDate
— (Date
)Date on which the assessment run was created using the
StartReplicationTaskAssessmentRun
operation.AssessmentProgress
— (map
)Indication of the completion progress for the individual assessments specified to run.
IndividualAssessmentCount
— (Integer
)The number of individual assessments that are specified to run.
IndividualAssessmentCompletedCount
— (Integer
)The number of individual assessments that have completed, successfully or not.
LastFailureMessage
— (String
)Last message generated by an individual assessment failure.
ServiceAccessRoleArn
— (String
)ARN of the service role used to start the assessment run using the
StartReplicationTaskAssessmentRun
operation. The role must allow theiam:PassRole
action.ResultLocationBucket
— (String
)Amazon S3 bucket where DMS stores the results of this assessment run.
ResultLocationFolder
— (String
)Folder in an Amazon S3 bucket where DMS stores the results of this assessment run.
ResultEncryptionMode
— (String
)Encryption mode used to encrypt the assessment run results.
ResultKmsKeyArn
— (String
)ARN of the KMS encryption key used to encrypt the assessment run results.
AssessmentRunName
— (String
)Unique name of the assessment run.
-
(AWS.Response)
—
Returns:
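As a sketch of the filter syntax described above (client constructed as in the earlier examples), the following lists only assessment runs whose status is currently "running":
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

// "status" is one of the valid filter names documented for this operation.
var params = {
  Filters: [ { Name: 'status', Values: ['running'] } ]
};
dms.describeReplicationTaskAssessmentRuns(params, function(err, data) {
  if (err) return console.log(err, err.stack);
  (data.ReplicationTaskAssessmentRuns || []).forEach(function(run) {
    console.log(run.AssessmentRunName, run.Status,
                run.AssessmentProgress && run.AssessmentProgress.IndividualAssessmentCompletedCount);
  });
});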
describeReplicationTaskIndividualAssessments(params = {}, callback) ⇒ AWS.Request
Returns a paginated list of individual assessments based on filter settings.
These filter settings can specify a combination of premigration assessment runs, migration tasks, and assessment status values.
Service Reference:
Examples:
Calling the describeReplicationTaskIndividualAssessments operation
var params = {
  Filters: [
    {
      Name: 'STRING_VALUE', /* required */
      Values: [ /* required */ 'STRING_VALUE', /* more items */ ]
    },
    /* more items */
  ],
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeReplicationTaskIndividualAssessments(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
Filters
— (Array<map>
)Filters applied to the individual assessments described in the form of key-value pairs.
Valid filter names:
replication-task-assessment-run-arn
,replication-task-arn
,status
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)A pagination token returned for you to pass to a subsequent request. If you pass this token as the
Marker
value in a subsequent request, the response includes only records beyond the marker, up to the value specified in the request byMaxRecords
.ReplicationTaskIndividualAssessments
— (Array<map>
)One or more individual assessments as specified by
Filters
.ReplicationTaskIndividualAssessmentArn
— (String
)Amazon Resource Name (ARN) of this individual assessment.
ReplicationTaskAssessmentRunArn
— (String
)ARN of the premigration assessment run that is created to run this individual assessment.
IndividualAssessmentName
— (String
)Name of this individual assessment.
Status
— (String
)Individual assessment status.
This status can have one of the following values:
-
"cancelled"
-
"error"
-
"failed"
-
"passed"
-
"pending"
-
"running"
-
ReplicationTaskIndividualAssessmentStartDate
— (Date
)Date when this individual assessment was started as part of running the
StartReplicationTaskAssessmentRun
operation.
-
(AWS.Response)
—
Returns:
describeReplicationTasks(params = {}, callback) ⇒ AWS.Request
Returns information about replication tasks for your account in the current region.
Service Reference:
Examples:
Describe replication tasks
/* Returns information about replication tasks for your account in the current region. */

var params = {
  Filters: [
    {
      Name: "string",
      Values: ["string", "string"]
    }
  ],
  Marker: "",
  MaxRecords: 123
};
dms.describeReplicationTasks(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /* data = { Marker: "", ReplicationTasks: [] } */
});
Calling the describeReplicationTasks operation
var params = {
  Filters: [
    {
      Name: 'STRING_VALUE', /* required */
      Values: [ /* required */ 'STRING_VALUE', /* more items */ ]
    },
    /* more items */
  ],
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE',
  WithoutSettings: true || false
};
dms.describeReplicationTasks(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
Filters
— (Array<map>
)Filters applied to replication tasks.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.WithoutSettings
— (Boolean
)An option to avoid returning task setting information. Use this to reduce overhead when the setting information is too large. To use this option, choose
true
; otherwise, choosefalse
(the default).
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.ReplicationTasks
— (Array<map>
)A description of the replication tasks.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include:"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
-
"moving"
– The task is being moved in response to running theMoveReplicationTask
operation. -
"creating"
– The task is being created in response to running theCreateReplicationTask
operation. -
"deleting"
– The task is being deleted in response to running theDeleteReplicationTask
operation. -
"failed"
– The task failed to successfully complete the database migration in response to running theStartReplicationTask
operation. -
"failed-move"
– The task failed to move in response to running theMoveReplicationTask
operation. -
"modifying"
– The task definition is being modified in response to running theModifyReplicationTask
operation. -
"ready"
– The task is in aready
state where it can respond to other task operations, such asStartReplicationTask
orDeleteReplicationTask
. -
"running"
– The task is performing a database migration in response to running theStartReplicationTask
operation. -
"starting"
– The task is preparing to perform a database migration in response to running theStartReplicationTask
operation. -
"stopped"
– The task has stopped in response to running theStopReplicationTask
operation. -
"stopping"
– The task is preparing to stop in response to running theStopReplicationTask
operation. -
"testing"
– The database migration specified for this task is being tested in response to running either theStartReplicationTaskAssessmentRun
or theStartReplicationTaskAssessment
operation.Note:StartReplicationTaskAssessmentRun
is an improved premigration task assessment operation. TheStartReplicationTaskAssessment
operation assesses data type compatibility only between the source and target database of a given migration task. In contrast,StartReplicationTaskAssessmentRun
enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
orCdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error.The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12”
Commit time example: --cdc-stop-position “commit_time: 2018-02-09T12:12:12 “
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of theReplicationTask
object.
-
(AWS.Response)
—
Returns:
Waiter Resource States:
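A minimal waiter sketch tied to this operation (placeholder ARN, client constructed as in the earlier examples); waitFor('replicationTaskRunning') polls describeReplicationTasks until the task status becomes "running":
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

// Placeholder ARN; "replication-task-arn" is one of the valid filter names above.
var params = {
  Filters: [
    { Name: 'replication-task-arn',
      Values: ['arn:aws:dms:us-east-1:123456789012:task:EXAMPLE'] }
  ]
};
dms.waitFor('replicationTaskRunning', params, function(err, data) {
  if (err) console.log(err, err.stack); // waiter timed out or a request failed
  else console.log(data.ReplicationTasks[0].Status);
});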
describeSchemas(params = {}, callback) ⇒ AWS.Request
Returns information about the schema for the specified endpoint.
Examples:
Describe schemas
/* Returns information about the schema for the specified endpoint. */

var params = {
  EndpointArn: "",
  Marker: "",
  MaxRecords: 123
};
dms.describeSchemas(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /* data = { Marker: "", Schemas: [] } */
});
Calling the describeSchemas operation
var params = {
  EndpointArn: 'STRING_VALUE', /* required */
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeSchemas(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.Schemas
— (Array<String>
)The described schema.
-
(AWS.Response)
—
Returns:
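A pagination sketch, assuming a dms client as above and a placeholder endpoint ARN; it follows the returned Marker until every page of schema names has been collected, using the promise interface:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
var endpointArn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'; // placeholder

async function listAllSchemas() {
  var schemas = [];
  var marker;
  do {
    var params = { EndpointArn: endpointArn, MaxRecords: 100 };
    if (marker) params.Marker = marker; // continue from the previous page
    var data = await dms.describeSchemas(params).promise();
    schemas = schemas.concat(data.Schemas || []);
    marker = data.Marker;
  } while (marker);
  return schemas;
}
listAllSchemas().then(console.log).catch(console.error);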
describeTableStatistics(params = {}, callback) ⇒ AWS.Request
Returns table statistics on the database migration task, including table name, rows inserted, rows updated, and rows deleted.
Note that the "last updated" column in the DMS console only indicates the time that DMS last updated the table statistics record for a table. It does not indicate the time of the last update to the table.
Service Reference:
Examples:
Describe table statistics
/* Returns table statistics on the database migration task, including table name, rows inserted, rows updated, and rows deleted. */

var params = {
  Marker: "",
  MaxRecords: 123,
  ReplicationTaskArn: ""
};
dms.describeTableStatistics(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /* data = { Marker: "", ReplicationTaskArn: "", TableStatistics: [] } */
});
Calling the describeTableStatistics operation
var params = {
  ReplicationTaskArn: 'STRING_VALUE', /* required */
  Filters: [
    {
      Name: 'STRING_VALUE', /* required */
      Values: [ /* required */ 'STRING_VALUE', /* more items */ ]
    },
    /* more items */
  ],
  Marker: 'STRING_VALUE',
  MaxRecords: 'NUMBER_VALUE'
};
dms.describeTableStatistics(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 500.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.Filters
— (Array<map>
)Filters applied to table statistics.
Valid filter names: schema-name | table-name | table-state
A combination of filters creates an AND condition where each record matches all specified filters.
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
TableStatistics
— (Array<map>
)The table statistics.
SchemaName
— (String
)The schema name.
TableName
— (String
)The name of the table.
Inserts
— (Integer
)The number of insert actions performed on a table.
Deletes
— (Integer
)The number of delete actions performed on a table.
Updates
— (Integer
)The number of update actions performed on a table.
Ddls
— (Integer
)The number of data definition language (DDL) statements used to build and modify the structure of your tables.
FullLoadRows
— (Integer
)The number of rows added during the full load operation.
FullLoadCondtnlChkFailedRows
— (Integer
)The number of rows that failed conditional checks during the full load operation (valid only for migrations where DynamoDB is the target).
FullLoadErrorRows
— (Integer
)The number of rows that failed to load during the full load operation (valid only for migrations where DynamoDB is the target).
FullLoadStartTime
— (Date
)The time when the full load operation started.
FullLoadEndTime
— (Date
)The time when the full load operation completed.
FullLoadReloaded
— (Boolean
)A value that indicates if the table was reloaded (
true
) or loaded as part of a new full load operation (false
).LastUpdateTime
— (Date
)The last time a table was updated.
TableState
— (String
)The state of the tables described.
Valid states: Table does not exist | Before load | Full load | Table completed | Table cancelled | Table error | Table all | Table updates | Table is being reloaded
ValidationPendingRecords
— (Integer
)The number of records that have yet to be validated.
ValidationFailedRecords
— (Integer
)The number of records that failed validation.
ValidationSuspendedRecords
— (Integer
)The number of records that couldn't be validated.
ValidationState
— (String
)The validation state of the table.
This parameter can have the following values:
-
Not enabled – Validation isn't enabled for the table in the migration task.
-
Pending records – Some records in the table are waiting for validation.
-
Mismatched records – Some records in the table don't match between the source and target.
-
Suspended records – Some records in the table couldn't be validated.
-
No primary key – The table couldn't be validated because it has no primary key.
-
Table error – The table wasn't validated because it's in an error state and some data wasn't migrated.
-
Validated – All rows in the table are validated. If the table is updated, the status can change from Validated.
-
Error – The table couldn't be validated because of an unexpected error.
-
Pending validation – The table is awaiting validation.
-
Preparing table – Preparing the table enabled in the migration task for validation.
-
Pending revalidation – All rows in the table are pending validation after the table was updated.
-
ValidationStateDetails
— (String
)Additional details about the state of validation.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
-
(AWS.Response)
—
Returns:
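A filtering sketch (placeholder task ARN and a hypothetical schema name, client constructed as in the earlier examples) that narrows the statistics to a single schema using the "schema-name" filter documented above:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  ReplicationTaskArn: 'arn:aws:dms:us-east-1:123456789012:task:EXAMPLE', // placeholder
  Filters: [ { Name: 'schema-name', Values: ['dms_sample'] } ] // hypothetical schema name
};
dms.describeTableStatistics(params, function(err, data) {
  if (err) return console.log(err, err.stack);
  (data.TableStatistics || []).forEach(function(t) {
    console.log(t.SchemaName + '.' + t.TableName, t.TableState,
                'inserts:', t.Inserts, 'validation:', t.ValidationState);
  });
});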
importCertificate(params = {}, callback) ⇒ AWS.Request
Uploads the specified certificate.
Service Reference:
Examples:
Import certificate
/* Uploads the specified certificate. */

var params = {
  CertificateIdentifier: "",
  CertificatePem: ""
};
dms.importCertificate(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /* data = { Certificate: {} } */
});
Calling the importCertificate operation
var params = {
  CertificateIdentifier: 'STRING_VALUE', /* required */
  CertificatePem: 'STRING_VALUE',
  CertificateWallet: Buffer.from('...') || 'STRING_VALUE' /* Strings will be Base-64 encoded on your behalf */,
  Tags: [
    {
      Key: 'STRING_VALUE',
      ResourceArn: 'STRING_VALUE',
      Value: 'STRING_VALUE'
    },
    /* more items */
  ]
};
dms.importCertificate(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
CertificateIdentifier
— (String
)A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
CertificatePem
— (String
)The contents of a
.pem
file, which contains an X.509 certificate.CertificateWallet
— (Buffer, Typed Array, Blob, String
)The location of an imported Oracle Wallet certificate for use with SSL. Provide the name of a
.sso
file using thefileb://
prefix. You can't provide the certificate inline.Tags
— (Array<map>
)The tags associated with the certificate.
Key
— (String
)A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").Value
— (String
)A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").ResourceArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the resource for which the tag is created.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Certificate
— (map
)The certificate to be uploaded.
CertificateIdentifier
— (String
)A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
CertificateCreationDate
— (Date
)The date that the certificate was created.
CertificatePem
— (String
)The contents of a
.pem
file, which contains an X.509 certificate.CertificateWallet
— (Buffer, Typed Array, Blob, String
)The location of an imported Oracle Wallet certificate for use with SSL.
CertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate.
CertificateOwner
— (String
)The owner of the certificate.
ValidFromDate
— (Date
)The beginning date that the certificate is valid.
ValidToDate
— (Date
)The final date that the certificate is valid.
SigningAlgorithm
— (String
)The signing algorithm for the certificate.
KeyLength
— (Integer
)The key length of the cryptographic algorithm being used.
-
(AWS.Response)
—
Returns:
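A usage sketch (hypothetical local file path and identifier, client constructed as in the earlier examples) that reads an X.509 certificate from disk and passes its text as CertificatePem:
var AWS = require('aws-sdk');
var fs = require('fs');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

// Hypothetical file path; CertificatePem takes the contents of the .pem file.
var params = {
  CertificateIdentifier: 'my-source-db-cert',
  CertificatePem: fs.readFileSync('./rds-ca-cert.pem', 'utf8')
};
dms.importCertificate(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data.Certificate.CertificateArn, data.Certificate.ValidToDate);
});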
listTagsForResource(params = {}, callback) ⇒ AWS.Request
Lists all metadata tags attached to a DMS resource, including replication instance, endpoint, security group, and migration task. For more information, see
Tag
data type description.Service Reference:
Examples:
List tags for resource
/* Lists all tags for an AWS DMS resource. */

var params = {
  ResourceArn: ""
};
dms.listTagsForResource(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /* data = { TagList: [] } */
});
Calling the listTagsForResource operation
var params = {
  ResourceArn: 'STRING_VALUE',
  ResourceArnList: [
    'STRING_VALUE',
    /* more items */
  ]
};
dms.listTagsForResource(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ResourceArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the DMS resource to list tags for. This returns a list of keys (names of tags) created for the resource and their associated tag values.
ResourceArnList
— (Array<String>
)List of ARNs that identify multiple DMS resources that you want to list tags for. This returns a list of keys (tag names) and their associated tag values. It also returns each tag's associated
ResourceArn
value, which is the ARN of the resource for which each listed tag is created.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:TagList
— (Array<map>
)A list of tags for the resource.
Key
— (String
)A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").Value
— (String
)A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can only contain the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").ResourceArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the resource for which the tag is created.
-
(AWS.Response)
—
Returns:
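A short promise-style sketch (placeholder ARN, client constructed as in the earlier examples) that prints each returned tag as a key=value pair:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = { ResourceArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE' }; // placeholder
dms.listTagsForResource(params).promise()
  .then(function(data) {
    (data.TagList || []).forEach(function(tag) {
      console.log(tag.Key + '=' + tag.Value);
    });
  })
  .catch(function(err) { console.log(err, err.stack); });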
modifyEndpoint(params = {}, callback) ⇒ AWS.Request
Modifies the specified endpoint.
Note: For a MySQL source or target endpoint, don't explicitly specify the database using theDatabaseName
request parameter on theModifyEndpoint
API call. SpecifyingDatabaseName
when you modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.Service Reference:
Examples:
Modify endpoint
/* Modifies the specified endpoint. */

var params = {
  CertificateArn: "",
  DatabaseName: "",
  EndpointArn: "",
  EndpointIdentifier: "",
  EndpointType: "source",
  EngineName: "",
  ExtraConnectionAttributes: "",
  Password: "",
  Port: 123,
  ServerName: "",
  SslMode: "require",
  Username: ""
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /* data = { Endpoint: {} } */
});
Calling the modifyEndpoint operation
var params = { EndpointArn: 'STRING_VALUE', /* required */ CertificateArn: 'STRING_VALUE', DatabaseName: 'STRING_VALUE', DmsTransferSettings: { BucketName: 'STRING_VALUE', ServiceAccessRoleArn: 'STRING_VALUE' }, DocDbSettings: { DatabaseName: 'STRING_VALUE', DocsToInvestigate: 'NUMBER_VALUE', ExtractDocId: true || false, KmsKeyId: 'STRING_VALUE', NestingLevel: none | one, Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', Username: 'STRING_VALUE' }, DynamoDbSettings: { ServiceAccessRoleArn: 'STRING_VALUE' /* required */ }, ElasticsearchSettings: { EndpointUri: 'STRING_VALUE', /* required */ ServiceAccessRoleArn: 'STRING_VALUE', /* required */ ErrorRetryDuration: 'NUMBER_VALUE', FullLoadErrorPercentage: 'NUMBER_VALUE' }, EndpointIdentifier: 'STRING_VALUE', EndpointType: source | target, EngineName: 'STRING_VALUE', ExactSettings: true || false, ExternalTableDefinition: 'STRING_VALUE', ExtraConnectionAttributes: 'STRING_VALUE', IBMDb2Settings: { CurrentLsn: 'STRING_VALUE', DatabaseName: 'STRING_VALUE', MaxKBytesPerRead: 'NUMBER_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', SetDataCaptureChanges: true || false, Username: 'STRING_VALUE' }, KafkaSettings: { Broker: 'STRING_VALUE', IncludeControlDetails: true || false, IncludeNullAndEmpty: true || false, IncludePartitionValue: true || false, IncludeTableAlterOperations: true || false, IncludeTransactionDetails: true || false, MessageFormat: json | json-unformatted, MessageMaxBytes: 'NUMBER_VALUE', NoHexPrefix: true || false, PartitionIncludeSchemaTable: true || false, SaslPassword: 'STRING_VALUE', SaslUsername: 'STRING_VALUE', SecurityProtocol: plaintext | ssl-authentication | ssl-encryption | sasl-ssl, SslCaCertificateArn: 'STRING_VALUE', SslClientCertificateArn: 'STRING_VALUE', SslClientKeyArn: 'STRING_VALUE', SslClientKeyPassword: 'STRING_VALUE', Topic: 'STRING_VALUE' }, KinesisSettings: { IncludeControlDetails: true || false, IncludeNullAndEmpty: true || false, IncludePartitionValue: true || false, IncludeTableAlterOperations: true || false, IncludeTransactionDetails: true || false, MessageFormat: json | json-unformatted, NoHexPrefix: true || false, PartitionIncludeSchemaTable: true || false, ServiceAccessRoleArn: 'STRING_VALUE', StreamArn: 'STRING_VALUE' }, MicrosoftSQLServerSettings: { BcpPacketSize: 'NUMBER_VALUE', ControlTablesFileGroup: 'STRING_VALUE', DatabaseName: 'STRING_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', QuerySingleAlwaysOnNode: true || false, ReadBackupOnly: true || false, SafeguardPolicy: rely-on-sql-server-replication-agent | exclusive-automatic-truncation | shared-automatic-truncation, SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', UseBcpFullLoad: true || false, UseThirdPartyBackupDevice: true || false, Username: 'STRING_VALUE' }, MongoDbSettings: { AuthMechanism: default | mongodb_cr | scram_sha_1, AuthSource: 'STRING_VALUE', AuthType: no | password, DatabaseName: 'STRING_VALUE', DocsToInvestigate: 'STRING_VALUE', ExtractDocId: 'STRING_VALUE', KmsKeyId: 'STRING_VALUE', NestingLevel: none | one, Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', Username: 'STRING_VALUE' }, MySQLSettings: { 
AfterConnectScript: 'STRING_VALUE', CleanSourceMetadataOnMismatch: true || false, DatabaseName: 'STRING_VALUE', EventsPollInterval: 'NUMBER_VALUE', MaxFileSize: 'NUMBER_VALUE', ParallelLoadThreads: 'NUMBER_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', ServerTimezone: 'STRING_VALUE', TargetDbType: specific-database | multiple-databases, Username: 'STRING_VALUE' }, NeptuneSettings: { S3BucketFolder: 'STRING_VALUE', /* required */ S3BucketName: 'STRING_VALUE', /* required */ ErrorRetryDuration: 'NUMBER_VALUE', IamAuthEnabled: true || false, MaxFileSize: 'NUMBER_VALUE', MaxRetryCount: 'NUMBER_VALUE', ServiceAccessRoleArn: 'STRING_VALUE' }, OracleSettings: { AccessAlternateDirectly: true || false, AddSupplementalLogging: true || false, AdditionalArchivedLogDestId: 'NUMBER_VALUE', AllowSelectNestedTables: true || false, ArchivedLogDestId: 'NUMBER_VALUE', ArchivedLogsOnly: true || false, AsmPassword: 'STRING_VALUE', AsmServer: 'STRING_VALUE', AsmUser: 'STRING_VALUE', CharLengthSemantics: default | char | byte, DatabaseName: 'STRING_VALUE', DirectPathNoLog: true || false, DirectPathParallelLoad: true || false, EnableHomogenousTablespace: true || false, ExtraArchivedLogDestIds: [ 'NUMBER_VALUE', /* more items */ ], FailTasksOnLobTruncation: true || false, NumberDatatypeScale: 'NUMBER_VALUE', OraclePathPrefix: 'STRING_VALUE', ParallelAsmReadThreads: 'NUMBER_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', ReadAheadBlocks: 'NUMBER_VALUE', ReadTableSpaceName: true || false, ReplacePathPrefix: true || false, RetryInterval: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerOracleAsmAccessRoleArn: 'STRING_VALUE', SecretsManagerOracleAsmSecretId: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', SecurityDbEncryption: 'STRING_VALUE', SecurityDbEncryptionName: 'STRING_VALUE', ServerName: 'STRING_VALUE', SpatialDataOptionToGeoJsonFunctionName: 'STRING_VALUE', StandbyDelayTime: 'NUMBER_VALUE', UseAlternateFolderForOnline: true || false, UseBFile: true || false, UseDirectPathFullLoad: true || false, UseLogminerReader: true || false, UsePathPrefix: 'STRING_VALUE', Username: 'STRING_VALUE' }, Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', PostgreSQLSettings: { AfterConnectScript: 'STRING_VALUE', CaptureDdls: true || false, DatabaseName: 'STRING_VALUE', DdlArtifactsSchema: 'STRING_VALUE', ExecuteTimeout: 'NUMBER_VALUE', FailTasksOnLobTruncation: true || false, HeartbeatEnable: true || false, HeartbeatFrequency: 'NUMBER_VALUE', HeartbeatSchema: 'STRING_VALUE', MaxFileSize: 'NUMBER_VALUE', Password: 'STRING_VALUE', PluginName: no-preference | test-decoding | pglogical, Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', SlotName: 'STRING_VALUE', Username: 'STRING_VALUE' }, RedisSettings: { Port: 'NUMBER_VALUE', /* required */ ServerName: 'STRING_VALUE', /* required */ AuthPassword: 'STRING_VALUE', AuthType: none | auth-role | auth-token, AuthUserName: 'STRING_VALUE', SslCaCertificateArn: 'STRING_VALUE', SslSecurityProtocol: plaintext | ssl-encryption }, RedshiftSettings: { AcceptAnyDate: true || false, AfterConnectScript: 'STRING_VALUE', BucketFolder: 'STRING_VALUE', BucketName: 'STRING_VALUE', CaseSensitiveNames: true || false, CompUpdate: true || false, ConnectionTimeout: 'NUMBER_VALUE', DatabaseName: 'STRING_VALUE', DateFormat: 'STRING_VALUE', EmptyAsNull: true || false, 
EncryptionMode: sse-s3 | sse-kms, ExplicitIds: true || false, FileTransferUploadStreams: 'NUMBER_VALUE', LoadTimeout: 'NUMBER_VALUE', MaxFileSize: 'NUMBER_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', RemoveQuotes: true || false, ReplaceChars: 'STRING_VALUE', ReplaceInvalidChars: 'STRING_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', ServerSideEncryptionKmsKeyId: 'STRING_VALUE', ServiceAccessRoleArn: 'STRING_VALUE', TimeFormat: 'STRING_VALUE', TrimBlanks: true || false, TruncateColumns: true || false, Username: 'STRING_VALUE', WriteBufferSize: 'NUMBER_VALUE' }, S3Settings: { AddColumnName: true || false, BucketFolder: 'STRING_VALUE', BucketName: 'STRING_VALUE', CannedAclForObjects: none | private | public-read | public-read-write | authenticated-read | aws-exec-read | bucket-owner-read | bucket-owner-full-control, CdcInsertsAndUpdates: true || false, CdcInsertsOnly: true || false, CdcMaxBatchInterval: 'NUMBER_VALUE', CdcMinFileSize: 'NUMBER_VALUE', CdcPath: 'STRING_VALUE', CompressionType: none | gzip, CsvDelimiter: 'STRING_VALUE', CsvNoSupValue: 'STRING_VALUE', CsvNullValue: 'STRING_VALUE', CsvRowDelimiter: 'STRING_VALUE', DataFormat: csv | parquet, DataPageSize: 'NUMBER_VALUE', DatePartitionDelimiter: SLASH | UNDERSCORE | DASH | NONE, DatePartitionEnabled: true || false, DatePartitionSequence: YYYYMMDD | YYYYMMDDHH | YYYYMM | MMYYYYDD | DDMMYYYY, DictPageSizeLimit: 'NUMBER_VALUE', EnableStatistics: true || false, EncodingType: plain | plain-dictionary | rle-dictionary, EncryptionMode: sse-s3 | sse-kms, ExternalTableDefinition: 'STRING_VALUE', IgnoreHeaderRows: 'NUMBER_VALUE', IncludeOpForFullLoad: true || false, MaxFileSize: 'NUMBER_VALUE', ParquetTimestampInMillisecond: true || false, ParquetVersion: parquet-1-0 | parquet-2-0, PreserveTransactions: true || false, Rfc4180: true || false, RowGroupLength: 'NUMBER_VALUE', ServerSideEncryptionKmsKeyId: 'STRING_VALUE', ServiceAccessRoleArn: 'STRING_VALUE', TimestampColumnName: 'STRING_VALUE', UseCsvNoSupValue: true || false }, ServerName: 'STRING_VALUE', ServiceAccessRoleArn: 'STRING_VALUE', SslMode: none | require | verify-ca | verify-full, SybaseSettings: { DatabaseName: 'STRING_VALUE', Password: 'STRING_VALUE', Port: 'NUMBER_VALUE', SecretsManagerAccessRoleArn: 'STRING_VALUE', SecretsManagerSecretId: 'STRING_VALUE', ServerName: 'STRING_VALUE', Username: 'STRING_VALUE' }, Username: 'STRING_VALUE' }; dms.modifyEndpoint(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });
Parameters:
-
params
(Object)
(defaults to: {})
—
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
EndpointIdentifier
— (String
)The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType
— (String
)The type of endpoint. Valid values are source and target.
Possible values include:
"source"
"target"
EngineName
— (String
)The type of engine for the endpoint. Valid values, depending on the EndpointType, include
"mysql"
,"oracle"
,"postgres"
,"mariadb"
,"aurora"
,"aurora-postgresql"
,"redshift"
,"s3"
,"db2"
,"azuredb"
,"sybase"
,"dynamodb"
,"mongodb"
,"kinesis"
,"kafka"
,"elasticsearch"
,"documentdb"
,"sqlserver"
, and"neptune"
.Username
— (String
)The user name to be used to log in to the endpoint database.
Password
— (String
)The password to be used to log in to the endpoint database.
ServerName
— (String
)The name of the server where the endpoint database resides.
Port
— (Integer
)The port used by the endpoint database.
DatabaseName
— (String
)The name of the endpoint database. For a MySQL source or target endpoint, do not specify DatabaseName.
ExtraConnectionAttributes
— (String
)Additional attributes associated with the connection. To reset this parameter, pass the empty string ("") as an argument.
CertificateArn
— (String
)The Amazon Resource Name (ARN) of the certificate used for SSL connection.
SslMode
— (String
)The SSL mode used to connect to the endpoint. The default value is none.
Possible values include:
"none"
"require"
"verify-ca"
"verify-full"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) for the IAM role you want to use to modify the endpoint. The role must allow the
iam:PassRole
action.ExternalTableDefinition
— (String
)The external table definition.
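For example, a minimal sketch of a modifyEndpoint call that changes only the basic connection attributes of an existing endpoint might look like the following; the 'STRING_VALUE' entries are placeholders, and the port and SSL mode shown are example values, not defaults:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',   /* required; ARN of the endpoint to modify (placeholder) */
  ServerName: 'STRING_VALUE',    /* new host name for the endpoint database (placeholder) */
  Port: 5432,                    /* example port; use your database's port */
  Username: 'STRING_VALUE',      /* placeholder credentials */
  Password: 'STRING_VALUE',
  SslMode: 'require'             /* one of "none", "require", "verify-ca", "verify-full" */
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});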
DynamoDbSettings
— (map
)Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB in the Database Migration Service User Guide.
ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.
S3Settings
— (map
)Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for DMS in the Database Migration Service User Guide.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.ExternalTableDefinition
— (String
)Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter
— (String
)The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (
\n
).CsvDelimiter
— (String
)The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder
— (String
)An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path
bucketFolder/schema_name/table_name/
. If this parameter isn't specified, then the path used isschema_name/table_name/
.BucketName
— (String
)The name of the S3 bucket.
CompressionType
— (String
)An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
Possible values include:"none"
"gzip"
EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use
SSE_S3
, you need an Identity and Access Management (IAM) role with permission to allow"arn:aws:s3:::dms-*"
to use the following actions:-
s3:CreateBucket
-
s3:ListBucket
-
s3:DeleteBucket
-
s3:GetBucketLocation
-
s3:GetObject
-
s3:PutObject
-
s3:DeleteObject
-
s3:GetObjectVersion
-
s3:GetBucketPolicy
-
s3:PutBucketPolicy
-
s3:DeleteBucketPolicy
"sse-s3"
"sse-kms"
-
ServerSideEncryptionKmsKeyId
— (String
)If you are using
SSE_KMS
for theEncryptionMode
, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.Here is a CLI example:
aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat
— (String
)The format of the data that you want to use for output. You can choose one of the following:
-
csv
: This is a row-based file format with comma-separated values (.csv). -
parquet
: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
"csv"
"parquet"
-
EncodingType
— (String
)The type of encoding you are using:
-
RLE_DICTIONARY
uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default. -
PLAIN
doesn't use encoding at all. Values are stored as they are. -
PLAIN_DICTIONARY
builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
"plain"
"plain-dictionary"
"rle-dictionary"
-
DictPageSizeLimit
— (Integer
)The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of
PLAIN
. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts toPLAIN
encoding. This size is used for .parquet file format only.RowGroupLength
— (Integer
)The number of rows in a row group. A smaller row group size provides faster reads, but writes become slower as the number of row groups grows. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum,
RowGroupLength
is set to the max row group length in bytes (64 * 1024 * 1024).DataPageSize
— (Integer
)The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion
— (String
)The version of the Apache Parquet format that you want to use:
parquet_1_0 (the default) or parquet_2_0.
Possible values include:
"parquet-1-0"
"parquet-2-0"
EnableStatistics
— (Boolean
)A value that enables statistics for Parquet pages and row groups. Choose
true
to enable statistics,false
to disable. Statistics includeNULL
,DISTINCT
,MAX
, andMIN
values. This parameter defaults totrue
. This value is used for .parquet file format only.IncludeOpForFullLoad
— (Boolean
)A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note: DMS supports theIncludeOpForFullLoad
parameter in versions 3.1.4 and later.For full load, records can only be inserted. By default (the
false
setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. IfIncludeOpForFullLoad
is set totrue
ory
, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.Note: This setting works together with theCdcInsertsOnly
and theCdcInsertsAndUpdates
parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.CdcInsertsOnly
— (Boolean
)A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the
false
setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.If
CdcInsertsOnly
is set totrue
ory
, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value ofIncludeOpForFullLoad
. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to I to indicate the INSERT operation at the source. IfIncludeOpForFullLoad
is set tofalse
, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.Note: DMS supports the interaction described preceding between theCdcInsertsOnly
andIncludeOpForFullLoad
parameters in versions 3.1.4 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.TimestampColumnName
— (String
)A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note: DMS supports theTimestampColumnName
parameter in versions 3.1.4 and later.DMS includes an additional
STRING
column in the .csv or .parquet object files of your migrated data when you setTimestampColumnName
to a nonblank value.For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is
yyyy-MM-dd HH:mm:ss.SSSSSS
. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.When the
AddColumnName
parameter is set totrue
, DMS also includes a name for the timestamp column that you set withTimestampColumnName
.ParquetTimestampInMillisecond
— (Boolean
)A value that specifies the precision of any
TIMESTAMP
column values that are written to an Amazon S3 object file in .parquet format.Note: DMS supports theParquetTimestampInMillisecond
parameter in versions 3.1.4 and later.When
ParquetTimestampInMillisecond
is set totrue
ory
, DMS writes allTIMESTAMP
columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.Currently, Amazon Athena and Glue can handle only millisecond precision for
TIMESTAMP
values. Set this parameter totrue
for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.Note: DMS writes anyTIMESTAMP
column values written to an S3 file in .csv format with microsecond precision. SettingParquetTimestampInMillisecond
has no effect on the string format of the timestamp column value that is inserted by setting theTimestampColumnName
parameter.CdcInsertsAndUpdates
— (Boolean
)A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is
false
, but whenCdcInsertsAndUpdates
is set totrue
ory
, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the
IncludeOpForFullLoad
parameter. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to eitherI
orU
to indicate INSERT and UPDATE operations at the source. But ifIncludeOpForFullLoad
is set tofalse
, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.Note: DMS supports the use of theCdcInsertsAndUpdates
parameter in versions 3.3.1 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.DatePartitionEnabled
— (Boolean
)When set to
true
, this parameter partitions S3 bucket folders based on transaction commit dates. The default value isfalse
. For more information about date-based folder partitioning, see Using date-based folder partitioning.DatePartitionSequence
— (String
)Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"YYYYMMDD"
"YYYYMMDDHH"
"YYYYMM"
"MMYYYYDD"
"DDMMYYYY"
DatePartitionDelimiter
— (String
)Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"SLASH"
"UNDERSCORE"
"DASH"
"NONE"
UseCsvNoSupValue
— (Boolean
)This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to
true
for columns not included in the supplemental log, DMS uses the value specified byCsvNoSupValue
. If not set or set tofalse
, DMS uses the null value for these columns.Note: This setting is supported in DMS versions 3.4.1 and later.CsvNoSupValue
— (String
)This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If
UseCsvNoSupValue
is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of theUseCsvNoSupValue
setting.Note: This setting is supported in DMS versions 3.4.1 and later.PreserveTransactions
— (Boolean
)If set to
true
, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified byCdcPath
. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.Note: This setting is supported in DMS versions 3.4.2 and later.CdcPath
— (String
)Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If
CdcPath
is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target if you setPreserveTransactions
totrue
, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified byBucketFolder
andBucketName
.For example, if you specify
CdcPath
asMyChangedData
, and you specifyBucketName
asMyTargetBucket
but do not specifyBucketFolder
, DMS creates the CDC folder path following:MyTargetBucket/MyChangedData
.If you specify the same
CdcPath
, and you specifyBucketName
asMyTargetBucket
andBucketFolder
asMyTargetData
, DMS creates the CDC folder path following:MyTargetBucket/MyTargetData/MyChangedData
.For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
Note: This setting is supported in DMS versions 3.4.2 and later.CannedAclForObjects
— (String
)A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
Possible values include:"none"
"private"
"public-read"
"public-read-write"
"authenticated-read"
"aws-exec-read"
"bucket-owner-read"
"bucket-owner-full-control"
AddColumnName
— (Boolean
)An optional parameter that, when set to
true
ory
, you can use to add column name information to the .csv output file.The default value is
false
. Valid values aretrue
,false
,y
, andn
.CdcMaxBatchInterval
— (Integer
)Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When
CdcMaxBatchInterval
andCdcMinFileSize
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 60 seconds.
CdcMinFileSize
— (Integer
)Minimum file size, defined in megabytes, to reach for a file output to Amazon S3.
When
CdcMinFileSize
andCdcMaxBatchInterval
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 32 MB.
CsvNullValue
— (String
)An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of
NULL
.The default value is
NULL
. Valid values include any valid string.IgnoreHeaderRows
— (Integer
)When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
MaxFileSize
— (Integer
)A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
Rfc4180
— (Boolean
)For an S3 source, when this value is set to
true
ory
, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set tofalse
orn
, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to
true
ory
using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.The default value is
true
. Valid values includetrue
,false
,y
, andn
.
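As a sketch only, a call that switches an existing S3 target endpoint to Parquet output with date-based folder partitioning could combine the settings above as follows; the ARN, role, bucket, and folder values are placeholders, and the enum values shown come from the lists above:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',            /* required (placeholder) */
  S3Settings: {
    ServiceAccessRoleArn: 'STRING_VALUE', /* IAM role that allows iam:PassRole (placeholder) */
    BucketName: 'STRING_VALUE',           /* target bucket (placeholder) */
    BucketFolder: 'STRING_VALUE',         /* optional folder prefix (placeholder) */
    DataFormat: 'parquet',
    ParquetVersion: 'parquet-2-0',
    DatePartitionEnabled: true,
    DatePartitionSequence: 'YYYYMMDD',
    DatePartitionDelimiter: 'SLASH'
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});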
DmsTransferSettings
— (map
)The settings in JSON format for the DMS transfer type of source endpoint.
Attributes include the following:
-
serviceAccessRoleArn - The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the
iam:PassRole
action. -
BucketName - The name of the S3 bucket to use.
Shorthand syntax for these settings is as follows:
ServiceAccessRoleArn=string ,BucketName=string
JSON syntax for these settings is as follows:
{ "ServiceAccessRoleArn": "string", "BucketName": "string"}
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the
iam:PassRole
action.BucketName
— (String
)The name of the S3 bucket to use.
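In an SDK call, the same two attributes are passed as an ordinary nested object; the ARN and bucket name below are placeholders:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',            /* required (placeholder) */
  DmsTransferSettings: {
    ServiceAccessRoleArn: 'STRING_VALUE', /* role that allows iam:PassRole (placeholder) */
    BucketName: 'STRING_VALUE'            /* S3 bucket to use (placeholder) */
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});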
-
MongoDbSettings
— (map
)Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in Endpoint configuration settings when using MongoDB as a source for Database Migration Service in the Database Migration Service User Guide.
Username
— (String
)The user name you use to access the MongoDB source endpoint.
Password
— (String
)The password for the user account you use to access the MongoDB source endpoint.
ServerName
— (String
)The name of the server on the MongoDB source endpoint.
Port
— (Integer
)The port value for the MongoDB source endpoint.
DatabaseName
— (String
)The database name on the MongoDB source endpoint.
AuthType
— (String
)The authentication type you use to access the MongoDB source endpoint.
When set to "no", user name and password parameters are not used and can be empty.
Possible values include:
"no"
"password"
AuthMechanism
— (String
)The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
Possible values include:
"default"
"mongodb_cr"
"scram_sha_1"
NestingLevel
— (String
)Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
Possible values include:
"none"
"one"
ExtractDocId
— (String
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (String
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.AuthSource
— (String
)The MongoDB database name. This setting isn't used when
AuthType
is set to"no"
.The default is
"admin"
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MongoDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MongoDB endpoint connection details.
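A sketch of a call that points an existing MongoDB source endpoint at Secrets Manager credentials and document (none) nesting mode, using only settings listed above, might look like this; all 'STRING_VALUE' entries are placeholders:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',                   /* required (placeholder) */
  MongoDbSettings: {
    AuthType: 'password',
    AuthMechanism: 'default',
    AuthSource: 'admin',
    NestingLevel: 'none',                        /* document mode */
    ExtractDocId: 'true',                        /* string value, used with NestingLevel "none" */
    SecretsManagerAccessRoleArn: 'STRING_VALUE', /* placeholder */
    SecretsManagerSecretId: 'STRING_VALUE'       /* placeholder */
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});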
KinesisSettings
— (map
)Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using object mapping to migrate data to a Kinesis data stream in the Database Migration Service User Guide.
StreamArn
— (String
)The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Kinesis data stream. The role must allow the
iam:PassRole
action.IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kinesis message output, unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is
false
.IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
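For example, a sketch of a call that updates only the Kinesis target settings; the ARNs are placeholders and the boolean values are illustrative choices, not defaults:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',            /* required (placeholder) */
  KinesisSettings: {
    StreamArn: 'STRING_VALUE',            /* ARN of the Kinesis data stream (placeholder) */
    ServiceAccessRoleArn: 'STRING_VALUE', /* role that allows iam:PassRole (placeholder) */
    MessageFormat: 'json',
    IncludePartitionValue: true,
    PartitionIncludeSchemaTable: true
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});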
KafkaSettings
— (map
)Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using object mapping to migrate data to a Kafka topic in the Database Migration Service User Guide.
Broker
— (String
)A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form
broker-hostname-or-ip:port
. For example,"ec2-12-345-678-901.compute-1.amazonaws.com:2345"
. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.Topic
— (String
)The topic to which you migrate the data. If you don't specify a topic, DMS specifies
"kafka-default-topic"
as the migration topic.MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kafka message output unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is
false
.MessageMaxBytes
— (Integer
)The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.SecurityProtocol
— (String
)Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
Possible values include:
"plaintext"
"ssl-authentication"
"ssl-encryption"
"sasl-ssl"
SslClientCertificateArn
— (String
)The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
SslClientKeyArn
— (String
)The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
SslClientKeyPassword
— (String
)The password for the client private key used to securely connect to a Kafka target endpoint.
SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
SaslUsername
— (String
)The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
SaslPassword
— (String
)The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
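A comparable sketch for a Kafka target secured with SASL-SSL, using only settings described above; the broker, topic, and credential values are placeholders:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',        /* required (placeholder) */
  KafkaSettings: {
    Broker: 'STRING_VALUE',           /* broker-hostname-or-ip:port, comma-separated (placeholder) */
    Topic: 'STRING_VALUE',            /* defaults to "kafka-default-topic" if omitted */
    MessageFormat: 'json-unformatted',
    SecurityProtocol: 'sasl-ssl',     /* requires SaslUsername and SaslPassword */
    SaslUsername: 'STRING_VALUE',     /* placeholder */
    SaslPassword: 'STRING_VALUE'      /* placeholder */
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});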
ElasticsearchSettings
— (map
)Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using Elasticsearch as a Target for DMS in the Database Migration Service User Guide.
ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.EndpointUri
— required — (String
)The endpoint for the Elasticsearch cluster. DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage
— (Integer
)The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration
— (Integer
)The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
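Both ServiceAccessRoleArn and EndpointUri are required inside ElasticsearchSettings, so a minimal sketch looks like the following; the URI and ARNs are placeholders and the numeric values are illustrative:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',            /* required (placeholder) */
  ElasticsearchSettings: {
    EndpointUri: 'STRING_VALUE',          /* required; cluster endpoint, HTTPS assumed if no protocol (placeholder) */
    ServiceAccessRoleArn: 'STRING_VALUE', /* required; role that allows iam:PassRole (placeholder) */
    FullLoadErrorPercentage: 10,          /* example value */
    ErrorRetryDuration: 300               /* example value, in seconds */
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});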
NeptuneSettings
— (map
)Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see Specifying graph-mapping rules using Gremlin and R2RML for Amazon Neptune as a target in the Database Migration Service User Guide.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. The role must allow the
iam:PassRole
action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the Database Migration Service User Guide.S3BucketName
— required — (String
)The name of the Amazon S3 bucket where DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder
— required — (String
)A folder path where you want DMS to store migrated graph data in the S3 bucket specified by
S3BucketName
ErrorRetryDuration
— (Integer
)The number of milliseconds for DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize
— (Integer
)The maximum size in kilobytes of migrated graph data stored in a .csv file before DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount
— (Integer
)The number of times for DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled
— (Boolean
)If you want Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to
true
. Then attach the appropriate IAM policy document to your service role specified byServiceAccessRoleArn
. The default isfalse
.
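S3BucketName and S3BucketFolder are required inside NeptuneSettings; a sketch with placeholder names and ARNs follows, with the numeric values taken from the stated defaults:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',            /* required (placeholder) */
  NeptuneSettings: {
    S3BucketName: 'STRING_VALUE',         /* required; staging bucket for .csv graph data (placeholder) */
    S3BucketFolder: 'STRING_VALUE',       /* required; folder path in that bucket (placeholder) */
    ServiceAccessRoleArn: 'STRING_VALUE', /* role that allows iam:PassRole (placeholder) */
    IamAuthEnabled: true,
    MaxRetryCount: 5,                     /* the documented default */
    ErrorRetryDuration: 250               /* the documented default, in milliseconds */
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});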
RedshiftSettings
— (map
)Provides information that defines an Amazon Redshift endpoint.
AcceptAnyDate
— (Boolean
)A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose
true
orfalse
(the default).This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript
— (String
)Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder
— (String
)An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift
COPY
command to upload the .csv files to the target table. The files are deleted once theCOPY
operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName
— (String
)The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames
— (Boolean
)If Amazon Redshift is configured to support case sensitive schema names, set
CaseSensitiveNames
totrue
. The default isfalse
.CompUpdate
— (Boolean
)If you set
CompUpdate
totrue
Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other thanRAW
. If you setCompUpdate
tofalse
, automatic compression is disabled and existing column encodings aren't changed. The default istrue
.ConnectionTimeout
— (Integer
)A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName
— (String
)The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat
— (String
)The date format that you are using. Valid values are
auto
(case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Usingauto
recognizes most strings, even some that aren't supported when you use a date format string.If your date and time values use formats different from each other, set this to
auto
.EmptyAsNull
— (Boolean
)A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of
true
sets empty CHAR and VARCHAR fields to null. The default isfalse
.EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
Possible values include:
"sse-s3"
"sse-kms"
ExplicitIds
— (Boolean
)This setting is only valid for a full-load migration task. Set
ExplicitIds
totrue
to have tables withIDENTITY
columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default isfalse
.FileTransferUploadStreams
— (Integer
)The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams
accepts a value from 1 through 64. It defaults to 10.LoadTimeout
— (Integer
)The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize
— (Integer
)The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576KB (1 GB).
Password
— (String
)The password for the user named in the
username
property.Port
— (Integer
)The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes
— (Boolean
)A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose
true
to remove quotation marks. The default isfalse
.ReplaceInvalidChars
— (String
)A list of characters that you want to replace. Use with
ReplaceChars
.ReplaceChars
— (String
)A value that specifies to replaces the invalid characters specified in
ReplaceInvalidChars
, substituting the specified characters instead. The default is"?"
.ServerName
— (String
)The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the
iam:PassRole
action.ServerSideEncryptionKmsKeyId
— (String
)The KMS key ID. If you are using
SSE_KMS
for theEncryptionMode
, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.TimeFormat
— (String
)The time format that you want to use. Valid values are
auto
(case-sensitive),'timeformat_string'
,'epochsecs'
, or'epochmillisecs'
. It defaults to auto. Using auto
recognizes most strings, even some that aren't supported when you use a time format string.If your date and time values use formats different from each other, set this parameter to
auto
.TrimBlanks
— (Boolean
)A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose
true
to remove unneeded white space. The default isfalse
.TruncateColumns
— (Boolean
)A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose
true
to truncate data. The default isfalse
.Username
— (String
)An Amazon Redshift user name for a registered user.
WriteBufferSize
— (Integer
)The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Amazon Redshift endpoint connection details.
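A sketch of a call that updates only the Redshift target settings, again with placeholder names and ARNs; the numeric and boolean values are illustrative choices:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',            /* required (placeholder) */
  RedshiftSettings: {
    ServerName: 'STRING_VALUE',           /* Redshift cluster endpoint (placeholder) */
    Port: 5439,                           /* the documented default port */
    DatabaseName: 'STRING_VALUE',         /* placeholder */
    BucketName: 'STRING_VALUE',           /* intermediate S3 bucket for .csv files (placeholder) */
    ServiceAccessRoleArn: 'STRING_VALUE', /* role with access to Amazon Redshift (placeholder) */
    FileTransferUploadStreams: 10,        /* the documented default */
    TruncateColumns: true,
    TrimBlanks: true
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});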
PostgreSQLSettings
— (map
)Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see Extra connection attributes when using PostgreSQL as a source for DMS and Extra connection attributes when using PostgreSQL as a target for DMS in the Database Migration Service User Guide.
AfterConnectScript
— (String
)For use with change data capture (CDC) only, this attribute has DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example:
afterConnectScript=SET session_replication_role='replica'
CaptureDdls
— (Boolean
)To capture DDL events, DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to
N
, you don't have to create tables or triggers on the source database.MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example:
maxFileSize=512
DatabaseName
— (String
)Database name for the endpoint.
DdlArtifactsSchema
— (String
)The schema in which the operational DDL database artifacts are created.
Example:
ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout
— (Integer
)Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example:
executeTimeout=100;
FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this value causes a task to fail if the actual size of a LOB column is greater than the specifiedLobMaxSize
If a task is set to Limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
HeartbeatEnable
— (Boolean
)The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps
restart_lsn
moving and prevents storage full scenarios.HeartbeatSchema
— (String
)Sets the schema in which the heartbeat artifacts are created.
HeartbeatFrequency
— (Integer
)Sets the WAL heartbeat frequency (in minutes).
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SlotName
— (String
)Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance.
When used with the
CdcStartPosition
request parameter for the DMS API, this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting ofCdcStartPosition
. If the specified slot doesn't exist or the task doesn't have a validCdcStartPosition
setting, DMS raises an error.For more information about setting the
CdcStartPosition
request parameter, see Determining a CDC native start point in the Database Migration Service User Guide. For more information about usingCdcStartPosition
, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.PluginName
— (String
)Specifies the plugin to use to create a replication slot.
Possible values include:"no-preference"
"test-decoding"
"pglogical"
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the PostgreSQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the PostgreSQL endpoint connection details.
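For a PostgreSQL source that uses a pre-created logical replication slot and WAL heartbeats, a sketch might look like this; the slot name, schema, and ARNs are placeholders, and the numeric values are illustrative:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',                   /* required (placeholder) */
  PostgreSQLSettings: {
    SlotName: 'STRING_VALUE',                    /* previously created logical replication slot (placeholder) */
    PluginName: 'pglogical',
    CaptureDdls: true,
    HeartbeatEnable: true,
    HeartbeatFrequency: 5,                       /* minutes, example value */
    HeartbeatSchema: 'STRING_VALUE',             /* placeholder */
    SecretsManagerAccessRoleArn: 'STRING_VALUE', /* placeholder */
    SecretsManagerSecretId: 'STRING_VALUE'       /* placeholder */
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});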
MySQLSettings
— (map
)Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see Extra connection attributes when using MySQL as a source for DMS and Extra connection attributes when using a MySQL-compatible database as a target for DMS in the Database Migration Service User Guide.
AfterConnectScript
— (String
)Specifies a script to run immediately after DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails.
For this parameter, provide the code of the script itself, not the name of a file containing the script.
CleanSourceMetadataOnMismatch
— (Boolean
)Adjusts the behavior of DMS when migrating from an SQL Server source database that is hosted as part of an Always On availability group cluster. If you need DMS to poll all the nodes in the Always On cluster for transaction backups, set this attribute to
false
.DatabaseName
— (String
)Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the
DatabaseName
request parameter on either theCreateEndpoint
orModifyEndpoint
API call. SpecifyingDatabaseName
when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.EventsPollInterval
— (Integer
)Specifies how often to check the binary log for new changes/events when the database is idle.
Example:
eventsPollInterval=5;
In the example, DMS checks for changes in the binary logs every five seconds.
TargetDbType
— (String
)Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example:
targetDbType=MULTIPLE_DATABASES
Possible values include:
"specific-database"
"multiple-databases"
MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example:
maxFileSize=512
ParallelLoadThreads
— (Integer
)Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example:
parallelLoadThreads=1
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
ServerTimezone
— (String
)Specifies the time zone for the source MySQL database.
Example:
serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MySQL endpoint connection details.
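And a sketch for a MySQL-compatible target that keeps the source's multiple-database layout and tunes the load; the ARN and secret ID are placeholders, and the numeric values are example choices:
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});

var params = {
  EndpointArn: 'STRING_VALUE',                   /* required (placeholder) */
  MySQLSettings: {
    TargetDbType: 'multiple-databases',
    ParallelLoadThreads: 4,                      /* example value; each thread uses a separate connection */
    MaxFileSize: 512,                            /* KB, example value */
    EventsPollInterval: 5,                       /* seconds between binary log checks when idle */
    SecretsManagerAccessRoleArn: 'STRING_VALUE', /* placeholder */
    SecretsManagerSecretId: 'STRING_VALUE'       /* placeholder */
  }
};

dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});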
OracleSettings
— (map
)Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see Extra connection attributes when using Oracle as a source for DMS and Extra connection attributes when using Oracle as a target for DMS in the Database Migration Service User Guide.
AddSupplementalLogging
— (Boolean
)Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId
— (Integer
)Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the
AdditionalArchivedLogDestId
option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.AdditionalArchivedLogDestId
— (Integer
)Set this attribute with
ArchivedLogDestId
in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless necessary. For additional information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.ExtraArchivedLogDestIds
— (Array<Integer>
)Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the
dest_id
column in thev$archived_log
view. Use this setting with thearchivedLogDestId
extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup.This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, DMS needs information about what destination to get archive redo logs from to read changes. DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2]
In a primary-to-multiple-standby setup, you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]
Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless it's necessary. For more information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.AllowSelectNestedTables
— (Boolean
)Set this attribute to
true
to enable replication of Oracle tables containing columns that are nested tables or defined types.ParallelAsmReadThreads
— (Integer
)Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the
readAheadBlocks
attribute.ReadAheadBlocks
— (Integer
)Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly
— (Boolean
)Set this attribute to
false
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.UseAlternateFolderForOnline
— (Boolean
)Set this attribute to
true
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.OraclePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix
— (Boolean
)Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified
usePathPrefix
setting to access the redo logs.EnableHomogenousTablespace
— (Boolean
)Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog
— (Boolean
)When set to
true
, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.ArchivedLogsOnly
— (Boolean
)When this field is set to
Y
, DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the DMS user account needs to be granted ASM privileges.AsmPassword
— (String
)For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the
asm_user_password
value. You set this value as part of the comma-separated value that you set to thePassword
request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmServer
— (String
)For an Oracle source endpoint, your ASM server address. You can set this value from the
asm_server
value. You setasm_server
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmUser
— (String
)For an Oracle source endpoint, your ASM user name. You can set this value from the
asm_user
value. You setasm_user
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.CharLengthSemantics
— (String
)Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to
CHAR
. Otherwise, the character column length is in bytes.Example:
Possible values include:charLengthSemantics=CHAR;
"default"
"char"
"byte"
DatabaseName
— (String
)Database name for the endpoint.
DirectPathParallelLoad
— (Boolean
)When set to
true
, this attribute specifies a parallel load whenuseDirectPathFullLoad
is set toY
. This attribute also only applies when you use the DMS parallel load feature. Note that the target table cannot have any constraints or indexes.FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this attribute causes a task to fail if the actual size of an LOB column is greater than the specifiedLobMaxSize
.If a task is set to limited LOB mode and this option is set to
true
, the task fails instead of truncating the LOB data.NumberDatatypeScale
— (Integer
)Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example:
numberDataTypeScale=12
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ReadTableSpaceName
— (Boolean
)When set to
true
, this attribute supports tablespace replication.RetryInterval
— (Integer
)Specifies the number of seconds that the system waits before resending a query.
Example:
retryInterval=6;
SecurityDbEncryption
— (String
)For an Oracle source endpoint, the transparent data encryption (TDE) password required by DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the
TDE_Password
part of the comma-separated value you set to thePassword
request parameter when you create the endpoint. TheSecurityDbEncryption
setting is related to thisSecurityDbEncryptionName
setting. For more information, see Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.SecurityDbEncryptionName
— (String
)For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the
SecurityDbEncryption
setting. For more information on setting the key name value ofSecurityDbEncryptionName
, see the information and example for setting thesecurityDbEncryptionName
extra connection attribute in Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.ServerName
— (String
)Fully qualified domain name of the endpoint.
SpatialDataOptionToGeoJsonFunctionName
— (String
)Use this attribute to convert
SDO_GEOMETRY
toGEOJSON
format. By default, DMS calls theSDO2GEOJSON
custom function if present and accessible. Or you can create your own custom function that mimics the operation ofSDO2GEOJSON
and setSpatialDataOptionToGeoJsonFunctionName
to call it instead.StandbyDelayTime
— (Integer
)Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases.
In DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
Username
— (String
)Endpoint connection user name.
UseBFile
— (Boolean
)Set this attribute to Y to capture change data using the Binary Reader utility. Set
UseLogminerReader
to N to set this attribute to Y. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for CDC.UseDirectPathFullLoad
— (Boolean
)Set this attribute to Y to have DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
UseLogminerReader
— (Boolean
)Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set
UseLogminerReader
to N, also setUseBfile
to Y. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in the DMS User Guide.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Oracle endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Oracle endpoint connection details.SecretsManagerOracleAsmAccessRoleArn
— (String
)Required only if your Oracle endpoint uses Automatic Storage Management (ASM). The full ARN of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the
SecretsManagerOracleAsmSecret
. ThisSecretsManagerOracleAsmSecret
has the secret value that allows access to the Oracle ASM of the endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerOracleAsmSecretId
. Or you can specify clear-text values forAsmUserName
,AsmPassword
, andAsmServerName
. You can't specify both. For more information on creating thisSecretsManagerOracleAsmSecret
and theSecretsManagerOracleAsmAccessRoleArn
andSecretsManagerOracleAsmSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerOracleAsmSecretId
— (String
)Required only if your Oracle endpoint uses Automatic Storage Management (ASM). The full ARN, partial ARN, or friendly name of the
SecretsManagerOracleAsmSecret
that contains the Oracle ASM connection details for the Oracle endpoint.
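As an illustration of the Binary Reader and Secrets Manager attributes above, a ModifyEndpoint call for an Oracle source might look like the following sketch. The ARNs and secret names are placeholders; the SDK accepts the Y/N attributes described above as Booleans.
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.modifyEndpoint({
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE', // placeholder endpoint ARN
  OracleSettings: {
    UseLogminerReader: false, // read the redo logs as a binary file ...
    UseBFile: true,           // ... using the Binary Reader utility
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-secrets-role',              // placeholder
    SecretsManagerSecretId: 'my-oracle-secret',                                                  // placeholder
    SecretsManagerOracleAsmAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-asm-secrets-role', // placeholder
    SecretsManagerOracleAsmSecretId: 'my-oracle-asm-secret'                                      // placeholder
  }
}, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data.Endpoint);      // the modified endpoint
});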
SybaseSettings
— (map
)Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see Extra connection attributes when using SAP ASE as a source for DMS and Extra connection attributes when using SAP ASE as a target for DMS in the Database Migration Service User Guide.
DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SAP ASE endpoint connection details.
MicrosoftSQLServerSettings
— (map
)Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see Extra connection attributes when using SQL Server as a source for DMS and Extra connection attributes when using SQL Server as a target for DMS in the Database Migration Service User Guide.
Port
— (Integer
)Endpoint TCP port.
BcpPacketSize
— (Integer
)The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName
— (String
)Database name for the endpoint.
ControlTablesFileGroup
— (String
)Specifies a file group for the DMS internal tables. When the replication task starts, all the internal DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
Password
— (String
)Endpoint connection password.
QuerySingleAlwaysOnNode
— (Boolean
)Cleans and recreates table metadata information on the replication instance when a mismatch occurs. An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
ReadBackupOnly
— (Boolean
)When this attribute is set to
Y
, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter toY
enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.SafeguardPolicy
— (String
)Use this attribute to minimize the need to access the backup log and enable DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task: When this method is used, DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one DMS task can access the database at any given time. Therefore, if you need to run parallel DMS tasks against the same database, use the default method.
Possible values include:"rely-on-sql-server-replication-agent"
"exclusive-automatic-truncation"
"shared-automatic-truncation"
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
UseBcpFullLoad
— (Boolean
)Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
UseThirdPartyBackupDevice
— (Boolean
)When this attribute is set to
Y
, DMS processes third-party transaction log backups if they are created in native format.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SQL Server endpoint connection details.
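For example, the SafeguardPolicy and backup-related attributes above can be combined in a single ModifyEndpoint call, sketched below. The endpoint ARN is a placeholder and the values are illustrative only.
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.modifyEndpoint({
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE', // placeholder endpoint ARN
  MicrosoftSQLServerSettings: {
    SafeguardPolicy: 'rely-on-sql-server-replication-agent', // default truncation-prevention method
    ReadBackupOnly: true,             // read changes from transaction log backups only
    UseThirdPartyBackupDevice: false,
    BcpPacketSize: 65536,             // illustrative BCP packet size in bytes
    UseBcpFullLoad: true
  }
}, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data.Endpoint.Status);
});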
IBMDb2Settings
— (map
)Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see Extra connection attributes when using Db2 LUW as a source for DMS in the Database Migration Service User Guide.
DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port. The default value is 50000.
ServerName
— (String
)Fully qualified domain name of the endpoint.
SetDataCaptureChanges
— (Boolean
)Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn
— (String
)For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead
— (Integer
)Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Db2 LUW endpoint connection details.
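A minimal sketch of configuring ongoing replication for a Db2 LUW source with these settings follows; the endpoint ARN and log sequence number are placeholders.
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.modifyEndpoint({
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE', // placeholder endpoint ARN
  IBMDb2Settings: {
    SetDataCaptureChanges: true,   // enable ongoing replication (CDC)
    CurrentLsn: 'LSN-PLACEHOLDER', // log sequence number where replication should start (placeholder)
    MaxKBytesPerRead: 64           // maximum read size in KB
  }
}, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data.Endpoint);      // the modified endpoint
});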
DocDbSettings
— (map
)Settings in JSON format for the source DocumentDB endpoint. For more information about the available settings, see the configuration properties section in Using DocumentDB as a Target for Database Migration Service in the Database Migration Service User Guide.
Username
— (String
)The user name you use to access the DocumentDB source endpoint.
Password
— (String
)The password for the user account you use to access the DocumentDB source endpoint.
ServerName
— (String
)The name of the server on the DocumentDB source endpoint.
Port
— (Integer
)The port value for the DocumentDB source endpoint.
DatabaseName
— (String
)The database name on the DocumentDB source endpoint.
NestingLevel
— (String
)Specifies either document or table mode.
Default value is
Possible values include:"none"
. Specify"none"
to use document mode. Specify"one"
to use table mode."none"
"one"
ExtractDocId
— (Boolean
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (Integer
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the DocumentDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the DocumentDB endpoint connection details.
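For example, switching a DocumentDB source endpoint to table mode with these settings might look like the following sketch (the endpoint ARN is a placeholder).
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.modifyEndpoint({
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE', // placeholder endpoint ARN
  DocDbSettings: {
    NestingLevel: 'one',     // table mode
    DocsToInvestigate: 1000  // documents to preview to determine the document organization
  }
}, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data.Endpoint);      // the modified endpoint
});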
RedisSettings
— (map
)Settings in JSON format for the Redis target endpoint.
ServerName
— required — (String
)Fully qualified domain name of the endpoint.
Port
— required — (Integer
)Transmission Control Protocol (TCP) port for the endpoint.
SslSecurityProtocol
— (String
)The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include
plaintext
andssl-encryption
. The default isssl-encryption
. Thessl-encryption
option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using theSslCaCertificateArn
setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA.The
Possible values include:plaintext
option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database."plaintext"
"ssl-encryption"
AuthType
— (String
)The type of authentication to perform when connecting to a Redis target. Options include
Possible values include:none
,auth-token
, andauth-role
. Theauth-token
option requires anAuthPassword
value to be provided. Theauth-role
option requiresAuthUserName
andAuthPassword
values to be provided."none"
"auth-role"
"auth-token"
AuthUserName
— (String
)The user name provided with the
auth-role
option of theAuthType
setting for a Redis target endpoint.AuthPassword
— (String
)The password provided with the
auth-role
andauth-token
options of theAuthType
setting for a Redis target endpoint.SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
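Putting the Redis connection and authentication settings above together, a ModifyEndpoint call might look like this sketch; the server name, user name, and password are placeholders.
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.modifyEndpoint({
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE', // placeholder endpoint ARN
  RedisSettings: {
    ServerName: 'redis.example.com',       // placeholder host
    Port: 6379,
    SslSecurityProtocol: 'ssl-encryption', // encrypted connection (the default)
    AuthType: 'auth-role',                 // requires AuthUserName and AuthPassword
    AuthUserName: 'dms-user',              // placeholder
    AuthPassword: 'EXAMPLE-PASSWORD'       // placeholder
  }
}, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data.Endpoint);      // the modified endpoint
});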
ExactSettings
— (Boolean
)If this attribute is Y, the current call to
ModifyEndpoint
replaces all existing endpoint settings with the exact settings that you specify in this call. If this attribute is N, the current call toModifyEndpoint
does two things:-
It replaces any endpoint settings that already exist with new values, for settings with the same names.
-
It creates new endpoint settings that you specify in the call, for settings with different names.
For example, if you call
create-endpoint ... --endpoint-settings '{"a":1}' ...
, the endpoint has the following endpoint settings:'{"a":1}'
. If you then callmodify-endpoint ... --endpoint-settings '{"b":2}' ...
for the same endpoint, the endpoint has the following settings:'{"a":1,"b":2}'
.However, suppose that you follow this with a call to
modify-endpoint ... --endpoint-settings '{"b":2}' --exact-settings ...
for that same endpoint again. Then the endpoint has the following settings:'{"b":2}'
. All existing settings are replaced with the exact settings that you specify.-
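In SDK terms, the same behavior can be sketched as follows; the endpoint ARN and bucket name are placeholders.
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
// Without ExactSettings (or with ExactSettings set to false), the settings below are merged into
// the endpoint's existing settings: same-named settings are overwritten, all others are kept.
// With ExactSettings set to true, the endpoint keeps only the settings supplied in this call.
dms.modifyEndpoint({
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE', // placeholder endpoint ARN
  ExactSettings: true,
  S3Settings: { BucketName: 'my-target-bucket' } // placeholder bucket
}, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data.Endpoint.S3Settings);
});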
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Endpoint
— (map
)The modified endpoint.
EndpointIdentifier
— (String
)The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType
— (String
)The type of endpoint. Valid values are
Possible values include:source
andtarget
."source"
"target"
EngineName
— (String
)The database engine name. Valid values, depending on the EndpointType, include
"mysql"
,"oracle"
,"postgres"
,"mariadb"
,"aurora"
,"aurora-postgresql"
,"redshift"
,"s3"
,"db2"
,"azuredb"
,"sybase"
,"dynamodb"
,"mongodb"
,"kinesis"
,"kafka"
,"elasticsearch"
,"documentdb"
,"sqlserver"
, and"neptune"
.EngineDisplayName
— (String
)The expanded name for the engine name. For example, if the
EngineName
parameter is "aurora," this value would be "Amazon Aurora MySQL."Username
— (String
)The user name used to connect to the endpoint.
ServerName
— (String
)The name of the server at the endpoint.
Port
— (Integer
)The port value used to access the endpoint.
DatabaseName
— (String
)The name of the database at the endpoint.
ExtraConnectionAttributes
— (String
)Additional connection attributes used to connect to the endpoint.
Status
— (String
)The status of the endpoint.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn
— (String
)The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode
— (String
)The SSL mode used to connect to the endpoint. The default value is
Possible values include:none
."none"
"require"
"verify-ca"
"verify-full"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.ExternalTableDefinition
— (String
)The external table definition.
ExternalId
— (String
)Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings
— (map
)The settings for the DynamoDB target endpoint. For more information, see the
DynamoDBSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.
S3Settings
— (map
)The settings for the S3 target endpoint. For more information, see the
S3Settings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.ExternalTableDefinition
— (String
)Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter
— (String
)The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (
\n
).CsvDelimiter
— (String
)The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder
— (String
)An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path
bucketFolder/schema_name/table_name/
. If this parameter isn't specified, then the path used isschema_name/table_name/
.BucketName
— (String
)The name of the S3 bucket.
CompressionType
— (String
)An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
Possible values include:"none"
"gzip"
EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use
SSE_S3
, you need an Identity and Access Management (IAM) role with permission to allow"arn:aws:s3:::dms-*"
to use the following actions:-
s3:CreateBucket
-
s3:ListBucket
-
s3:DeleteBucket
-
s3:GetBucketLocation
-
s3:GetObject
-
s3:PutObject
-
s3:DeleteObject
-
s3:GetObjectVersion
-
s3:GetBucketPolicy
-
s3:PutBucketPolicy
-
s3:DeleteBucketPolicy
"sse-s3"
"sse-kms"
-
ServerSideEncryptionKmsKeyId
— (String
)If you are using
SSE_KMS
for theEncryptionMode
, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.Here is a CLI example:
aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat
— (String
)The format of the data that you want to use for output. You can choose one of the following:
-
csv
: This is a row-based file format with comma-separated values (.csv). -
parquet
: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
"csv"
"parquet"
-
EncodingType
— (String
)The type of encoding you are using:
-
RLE_DICTIONARY
uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default. -
PLAIN
doesn't use encoding at all. Values are stored as they are. -
PLAIN_DICTIONARY
builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
"plain"
"plain-dictionary"
"rle-dictionary"
-
DictPageSizeLimit
— (Integer
)The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of
PLAIN
. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts toPLAIN
encoding. This size is used for .parquet file format only.RowGroupLength
— (Integer
)The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum,
RowGroupLength
is set to the max row group length in bytes (64 * 1024 * 1024).DataPageSize
— (Integer
)The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion
— (String
)The version of the Apache Parquet format that you want to use:
Possible values include:parquet_1_0
(the default) orparquet_2_0
."parquet-1-0"
"parquet-2-0"
EnableStatistics
— (Boolean
)A value that enables statistics for Parquet pages and row groups. Choose
true
to enable statistics,false
to disable. Statistics includeNULL
,DISTINCT
,MAX
, andMIN
values. This parameter defaults totrue
. This value is used for .parquet file format only.IncludeOpForFullLoad
— (Boolean
)A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note: DMS supports theIncludeOpForFullLoad
parameter in versions 3.1.4 and later.For full load, records can only be inserted. By default (the
false
setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. IfIncludeOpForFullLoad
is set totrue
ory
, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.Note: This setting works together with theCdcInsertsOnly
and theCdcInsertsAndUpdates
parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..CdcInsertsOnly
— (Boolean
)A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the
false
setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.If
CdcInsertsOnly
is set totrue
ory
, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value ofIncludeOpForFullLoad
. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to I to indicate the INSERT operation at the source. IfIncludeOpForFullLoad
is set tofalse
, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the interaction described preceding between theCdcInsertsOnly
andIncludeOpForFullLoad
parameters in versions 3.1.4 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.TimestampColumnName
— (String
)A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note: DMS supports theTimestampColumnName
parameter in versions 3.1.4 and later.DMS includes an additional
STRING
column in the .csv or .parquet object files of your migrated data when you setTimestampColumnName
to a nonblank value.For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is
yyyy-MM-dd HH:mm:ss.SSSSSS
. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.When the
AddColumnName
parameter is set totrue
, DMS also includes a name for the timestamp column that you set withTimestampColumnName
.ParquetTimestampInMillisecond
— (Boolean
)A value that specifies the precision of any
TIMESTAMP
column values that are written to an Amazon S3 object file in .parquet format.Note: DMS supports theParquetTimestampInMillisecond
parameter in versions 3.1.4 and later.When
ParquetTimestampInMillisecond
is set totrue
ory
, DMS writes allTIMESTAMP
columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.Currently, Amazon Athena and Glue can handle only millisecond precision for
TIMESTAMP
values. Set this parameter totrue
for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.Note: DMS writes anyTIMESTAMP
column values written to an S3 file in .csv format with microsecond precision. SettingParquetTimestampInMillisecond
has no effect on the string format of the timestamp column value that is inserted by setting theTimestampColumnName
parameter.CdcInsertsAndUpdates
— (Boolean
)A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is
false
, but whenCdcInsertsAndUpdates
is set totrue
ory
, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the
IncludeOpForFullLoad
parameter. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to eitherI
orU
to indicate INSERT and UPDATE operations at the source. But ifIncludeOpForFullLoad
is set tofalse
, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the use of theCdcInsertsAndUpdates
parameter in versions 3.3.1 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.DatePartitionEnabled
— (Boolean
)When set to
true
, this parameter partitions S3 bucket folders based on transaction commit dates. The default value isfalse
. For more information about date-based folder partitioning, see Using date-based folder partitioning.DatePartitionSequence
— (String
)Identifies the sequence of the date format to use during folder partitioning. The default value is
Possible values include:YYYYMMDD
. Use this parameter whenDatePartitionEnabled
is set totrue
."YYYYMMDD"
"YYYYMMDDHH"
"YYYYMM"
"MMYYYYDD"
"DDMMYYYY"
DatePartitionDelimiter
— (String
)Specifies a date separating delimiter to use during folder partitioning. The default value is
Possible values include:SLASH
. Use this parameter whenDatePartitionEnabled
is set totrue
."SLASH"
"UNDERSCORE"
"DASH"
"NONE"
UseCsvNoSupValue
— (Boolean
)This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to
true
for columns not included in the supplemental log, DMS uses the value specified byCsvNoSupValue
. If not set or set tofalse
, DMS uses the null value for these columns.Note: This setting is supported in DMS versions 3.4.1 and later.CsvNoSupValue
— (String
)This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If
UseCsvNoSupValue
is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of theUseCsvNoSupValue
setting.Note: This setting is supported in DMS versions 3.4.1 and later.PreserveTransactions
— (Boolean
)If set to
true
, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified byCdcPath
. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.Note: This setting is supported in DMS versions 3.4.2 and later.CdcPath
— (String
)Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If
CdcPath
is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target if you setPreserveTransactions
totrue
, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified byBucketFolder
andBucketName
.For example, if you specify
CdcPath
asMyChangedData
, and you specifyBucketName
asMyTargetBucket
but do not specifyBucketFolder
, DMS creates the CDC folder path following:MyTargetBucket/MyChangedData
.If you specify the same
CdcPath
, and you specifyBucketName
asMyTargetBucket
andBucketFolder
asMyTargetData
, DMS creates the CDC folder path following:MyTargetBucket/MyTargetData/MyChangedData
.For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
Note: This setting is supported in DMS versions 3.4.2 and later.CannedAclForObjects
— (String
)A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
Possible values include:"none"
"private"
"public-read"
"public-read-write"
"authenticated-read"
"aws-exec-read"
"bucket-owner-read"
"bucket-owner-full-control"
AddColumnName
— (Boolean
)An optional parameter. When set to
true
ory
, you can use it to add column name information to the .csv output file.The default value is
false
. Valid values aretrue
,false
,y
, andn
.CdcMaxBatchInterval
— (Integer
)Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When
CdcMaxBatchInterval
andCdcMinFileSize
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.The default value is 60 seconds.
CdcMinFileSize
— (Integer
)Minimum file size, defined in megabytes, to reach for a file output to Amazon S3.
When
CdcMinFileSize
andCdcMaxBatchInterval
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.The default value is 32 MB.
CsvNullValue
— (String
)An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of
NULL
.The default value is
NULL
. Valid values include any valid string.IgnoreHeaderRows
— (Integer
)When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
MaxFileSize
— (Integer
)A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
Rfc4180
— (Boolean
)For an S3 source, when this value is set to
true
ory
, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set tofalse
orn
, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to
true
ory
using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.The default value is
true
. Valid values includetrue
,false
,y
, andn
.
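The same S3Settings structure is supplied when you create or modify an S3 target endpoint. A rough SDK equivalent of the CLI example shown earlier, extended with a few of the format options described here, follows; the identifier, role ARN, bucket, and KMS key are placeholders.
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.createEndpoint({
  EndpointIdentifier: 'my-s3-target', // placeholder identifier
  EndpointType: 'target',
  EngineName: 's3',
  S3Settings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-s3-role', // placeholder role ARN
    BucketName: 'my-target-bucket',                                     // placeholder bucket
    BucketFolder: 'migrated',                                           // placeholder folder
    DataFormat: 'parquet',
    ParquetVersion: 'parquet-2-0',
    EncryptionMode: 'sse-kms',
    ServerSideEncryptionKmsKeyId: 'arn:aws:kms:us-east-1:123456789012:key/EXAMPLE', // placeholder key
    DatePartitionEnabled: true,
    DatePartitionSequence: 'YYYYMMDD',
    TimestampColumnName: 'dms_commit_ts' // illustrative column name
  }
}, function (err, data) {
  if (err) console.log(err, err.stack);        // an error occurred
  else console.log(data.Endpoint.EndpointArn); // the new endpoint
});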
DmsTransferSettings
— (map
)The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
-
ServiceAccessRoleArn
- The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow theiam:PassRole
action. -
BucketName
- The name of the S3 bucket to use.
Shorthand syntax for these settings is as follows:
ServiceAccessRoleArn=string,BucketName=string,
JSON syntax for these settings is as follows:
{ "ServiceAccessRoleArn": "string", "BucketName": "string"}
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the
iam:PassRole
action.BucketName
— (String
)The name of the S3 bucket to use.
-
MongoDbSettings
— (map
)The settings for the MongoDB source endpoint. For more information, see the
MongoDbSettings
structure.Username
— (String
)The user name you use to access the MongoDB source endpoint.
Password
— (String
)The password for the user account you use to access the MongoDB source endpoint.
ServerName
— (String
)The name of the server on the MongoDB source endpoint.
Port
— (Integer
)The port value for the MongoDB source endpoint.
DatabaseName
— (String
)The database name on the MongoDB source endpoint.
AuthType
— (String
)The authentication type you use to access the MongoDB source endpoint.
When set to
Possible values include:"no"
, user name and password parameters are not used and can be empty."no"
"password"
AuthMechanism
— (String
)The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x,
Possible values include:"default"
is"mongodb_cr"
. For MongoDB version 3.x or later,"default"
is"scram_sha_1"
. This setting isn't used whenAuthType
is set to"no"
."default"
"mongodb_cr"
"scram_sha_1"
NestingLevel
— (String
)Specifies either document or table mode.
Default value is
Possible values include:"none"
. Specify"none"
to use document mode. Specify"one"
to use table mode."none"
"one"
ExtractDocId
— (String
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (String
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.AuthSource
— (String
)The MongoDB database name. This setting isn't used when
AuthType
is set to"no"
.The default is
"admin"
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MongoDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MongoDB endpoint connection details.
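Because the modified endpoint is returned in the response, these MongoDB settings can also be read back from the data object in the callback; a small sketch follows (the endpoint ARN is a placeholder, and DocsToInvestigate is a String in MongoDbSettings).
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.modifyEndpoint({
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE', // placeholder endpoint ARN
  MongoDbSettings: {
    NestingLevel: 'one',       // table mode
    DocsToInvestigate: '1000', // note: a String for MongoDB endpoints
    AuthSource: 'admin'
  }
}, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else {
    var mongo = data.Endpoint.MongoDbSettings; // settings echoed back on the modified endpoint
    console.log(mongo.NestingLevel, mongo.DocsToInvestigate);
  }
});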
KinesisSettings
— (map
)The settings for the Amazon Kinesis target endpoint. For more information, see the
KinesisSettings
structure.StreamArn
— (String
)The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is
Possible values include:JSON
(default) orJSON_UNFORMATTED
(a single line with no tab)."json"
"json-unformatted"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Kinesis data stream. The role must allow the
iam:PassRole
action.IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kinesis message output, unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is
false
.IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
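A sketch of how these Kinesis options might be set through ModifyEndpoint; the stream and role ARNs are placeholders.
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.modifyEndpoint({
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE', // placeholder endpoint ARN
  KinesisSettings: {
    StreamArn: 'arn:aws:kinesis:us-east-1:123456789012:stream/my-dms-stream', // placeholder stream ARN
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-kinesis-role',  // placeholder role ARN
    MessageFormat: 'json',
    IncludePartitionValue: true,
    PartitionIncludeSchemaTable: true, // spread data across shards by schema and table
    IncludeTransactionDetails: false
  }
}, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data.Endpoint);      // the modified endpoint
});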
KafkaSettings
— (map
)The settings for the Apache Kafka target endpoint. For more information, see the
KafkaSettings
structure.Broker
— (String
)A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form
broker-hostname-or-ip:port
. For example,"ec2-12-345-678-901.compute-1.amazonaws.com:2345"
. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.Topic
— (String
)The topic to which you migrate the data. If you don't specify a topic, DMS specifies
"kafka-default-topic"
as the migration topic.MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is
Possible values include:JSON
(default) orJSON_UNFORMATTED
(a single line with no tab)."json"
"json-unformatted"
IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kafka message output unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is
false
.MessageMaxBytes
— (Integer
)The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.SecurityProtocol
— (String
)Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include
Possible values include:ssl-encryption
,ssl-authentication
, andsasl-ssl
.sasl-ssl
requiresSaslUsername
andSaslPassword
."plaintext"
"ssl-authentication"
"ssl-encryption"
"sasl-ssl"
SslClientCertificateArn
— (String
)The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
SslClientKeyArn
— (String
)The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
SslClientKeyPassword
— (String
)The password for the client private key used to securely connect to a Kafka target endpoint.
SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
SaslUsername
— (String
)The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
SaslPassword
— (String
)The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
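To make the Kafka options above concrete, here is a minimal sketch of a modifyEndpoint call (not taken from the service reference); the endpoint ARN, broker address, topic name, and SASL credentials are placeholder values, and the dms client is constructed as in the other examples on this page.
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  KafkaSettings: {
    Broker: 'ec2-12-345-678-901.compute-1.amazonaws.com:2345', /* broker-hostname-or-ip:port */
    Topic: 'dms-cdc-topic',            /* omit to let DMS use "kafka-default-topic" */
    MessageFormat: 'json-unformatted',
    SecurityProtocol: 'sasl-ssl',
    SaslUsername: 'msk-user',          /* required when SecurityProtocol is sasl-ssl */
    SaslPassword: 'msk-secret',        /* required when SecurityProtocol is sasl-ssl */
    IncludeTransactionDetails: true,
    NoHexPrefix: true
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});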
ElasticsearchSettings
— (map
)The settings for the Elasticsearch target endpoint. For more information, see the
ElasticsearchSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.EndpointUri
— required — (String
)The endpoint for the Elasticsearch cluster. DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage
— (Integer
)The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration
— (Integer
)The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
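As a rough illustration (not from the service reference), the Elasticsearch settings above might be supplied to modifyEndpoint as follows; the ARN, domain endpoint, and thresholds are placeholders.
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  ElasticsearchSettings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-es-access', /* must allow iam:PassRole */
    EndpointUri: 'https://search-mydomain.us-east-1.es.amazonaws.com',
    FullLoadErrorPercentage: 10, /* stop the full load once 10% of records fail to write */
    ErrorRetryDuration: 300      /* retry failed API requests for up to 300 seconds */
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});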
NeptuneSettings
— (map
)The settings for the Amazon Neptune target endpoint. For more information, see the
NeptuneSettings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. The role must allow the
iam:PassRole
action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the Database Migration Service User Guide.S3BucketName
— required — (String
)The name of the Amazon S3 bucket where DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder
— required — (String
)A folder path where you want DMS to store migrated graph data in the S3 bucket specified by
S3BucketName.
ErrorRetryDuration
— (Integer
)The number of milliseconds for DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize
— (Integer
)The maximum size in kilobytes of migrated graph data stored in a .csv file before DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount
— (Integer
)The number of times for DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled
— (Boolean
)If you want Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to
true
. Then attach the appropriate IAM policy document to your service role specified byServiceAccessRoleArn
. The default isfalse
.
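For orientation only (a sketch, not from the service reference), a Neptune target might combine the staging and retry settings above like this; the ARNs and bucket names are placeholders.
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  NeptuneSettings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-neptune-access', /* must allow iam:PassRole */
    S3BucketName: 'my-dms-staging-bucket',  /* bucket for intermediate .csv graph data */
    S3BucketFolder: 'neptune-staging/',
    MaxFileSize: 1048576,   /* bulk-load after 1,048,576 KB of staged data (the default) */
    MaxRetryCount: 5,
    ErrorRetryDuration: 250,
    IamAuthEnabled: true    /* requires the matching IAM policy on the service role */
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});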
RedshiftSettings
— (map
)Settings for the Amazon Redshift endpoint.
AcceptAnyDate
— (Boolean
)A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose
true
orfalse
(the default).This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript
— (String
)Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder
— (String
)An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift
COPY
command to upload the .csv files to the target table. The files are deleted once theCOPY
operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName
— (String
)The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames
— (Boolean
)If Amazon Redshift is configured to support case sensitive schema names, set
CaseSensitiveNames
totrue
. The default isfalse
.CompUpdate
— (Boolean
)If you set
CompUpdate
totrue
, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other thanRAW
. If you setCompUpdate
tofalse
, automatic compression is disabled and existing column encodings aren't changed. The default istrue
.ConnectionTimeout
— (Integer
)A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName
— (String
)The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat
— (String
)The date format that you are using. Valid values are
auto
(case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Usingauto
recognizes most strings, even some that aren't supported when you use a date format string.If your date and time values use formats different from each other, set this to
auto
.EmptyAsNull
— (Boolean
)A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of
true
sets empty CHAR and VARCHAR fields to null. The default isfalse
.EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use
SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
Possible values include:"sse-s3"
"sse-kms"
ExplicitIds
— (Boolean
)This setting is only valid for a full-load migration task. Set
ExplicitIds
totrue
to have tables withIDENTITY
columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default isfalse
.FileTransferUploadStreams
— (Integer
)The number of parallel streams (threads) used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams
accepts a value from 1 through 64. It defaults to 10.LoadTimeout
— (Integer
)The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize
— (Integer
)The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
Password
— (String
)The password for the user named in the
username
property.Port
— (Integer
)The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes
— (Boolean
)A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose
true
to remove quotation marks. The default isfalse
.ReplaceInvalidChars
— (String
)A list of characters that you want to replace. Use with
ReplaceChars
.ReplaceChars
— (String
)A value that replaces the invalid characters specified in
ReplaceInvalidChars
, substituting the specified characters instead. The default is"?"
.ServerName
— (String
)The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the
iam:PassRole
action.ServerSideEncryptionKmsKeyId
— (String
)The KMS key ID. If you are using
SSE_KMS
for theEncryptionMode
, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.TimeFormat
— (String
)The time format that you want to use. Valid values are
auto
(case-sensitive),'timeformat_string'
,'epochsecs'
, or'epochmillisecs'
. Usingauto
recognizes most strings, even some that aren't supported when you use a time format string.If your date and time values use formats different from each other, set this parameter to
auto
.TrimBlanks
— (Boolean
)A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose
true
to remove unneeded white space. The default isfalse
.TruncateColumns
— (Boolean
)A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose
true
to truncate data. The default isfalse
.Username
— (String
)An Amazon Redshift user name for a registered user.
WriteBufferSize
— (Integer
)The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Amazon Redshift endpoint connection details.
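Because the Redshift settings allow either clear-text credentials or a Secrets Manager reference (but not both), a Secrets Manager based sketch might look like the following; this is not from the service reference, and the ARNs, secret ID, and bucket names are placeholders.
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  RedshiftSettings: {
    DatabaseName: 'analytics',
    BucketName: 'my-dms-redshift-staging',     /* intermediate S3 bucket for .csv files */
    BucketFolder: 'dms-staging/',
    EncryptionMode: 'sse-kms',
    ServerSideEncryptionKmsKeyId: 'arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID',
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-secrets-access', /* must allow iam:PassRole */
    SecretsManagerSecretId: 'dms/redshift/endpoint'
    /* no Username, Password, ServerName, or Port when a secret is used */
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});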
PostgreSQLSettings
— (map
)The settings for the PostgreSQL source and target endpoint. For more information, see the
PostgreSQLSettings
structure.AfterConnectScript
— (String
)For use with change data capture (CDC) only, this attribute has DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example:
afterConnectScript=SET session_replication_role='replica'
CaptureDdls
— (Boolean
)To capture DDL events, DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to
N
, you don't have to create tables or triggers on the source database.MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example:
maxFileSize=512
DatabaseName
— (String
)Database name for the endpoint.
DdlArtifactsSchema
— (String
)The schema in which the operational DDL database artifacts are created.
Example:
ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout
— (Integer
)Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example:
executeTimeout=100;
FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this value causes a task to fail if the actual size of a LOB column is greater than the specifiedLobMaxSize
.If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
HeartbeatEnable
— (Boolean
)The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps
restart_lsn
moving and prevents storage full scenarios.HeartbeatSchema
— (String
)Sets the schema in which the heartbeat artifacts are created.
HeartbeatFrequency
— (Integer
)Sets the WAL heartbeat frequency (in minutes).
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SlotName
— (String
)Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance.
When used with the
CdcStartPosition
request parameter for the DMS API , this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting ofCdcStartPosition
. If the specified slot doesn't exist or the task doesn't have a validCdcStartPosition
setting, DMS raises an error.For more information about setting the
CdcStartPosition
request parameter, see Determining a CDC native start point in the Database Migration Service User Guide. For more information about usingCdcStartPosition
, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.PluginName
— (String
)Specifies the plugin to use to create a replication slot.
Possible values include:"no-preference"
"test-decoding"
"pglogical"
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the PostgreSQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the PostgreSQL endpoint connection details.
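As an illustration of the CDC-related PostgreSQL settings above (a sketch, not from the service reference; the ARN, slot name, and schema names are placeholders):
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  PostgreSQLSettings: {
    SlotName: 'dms_cdc_slot',         /* previously created logical replication slot */
    PluginName: 'pglogical',
    DdlArtifactsSchema: 'dms_artifacts',
    HeartbeatEnable: true,            /* keeps restart_lsn moving to avoid storage-full situations */
    HeartbeatSchema: 'dms_heartbeat',
    HeartbeatFrequency: 5,            /* minutes */
    FailTasksOnLobTruncation: true
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});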
MySQLSettings
— (map
)The settings for the MySQL source and target endpoint. For more information, see the
MySQLSettings
structure.AfterConnectScript
— (String
)Specifies a script to run immediately after DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails.
For this parameter, provide the code of the script itself, not the name of a file containing the script.
CleanSourceMetadataOnMismatch
— (Boolean
)Cleans and recreates table metadata information on the replication instance when a mismatch occurs. An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance
.DatabaseName
— (String
)Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the
DatabaseName
request parameter on either theCreateEndpoint
orModifyEndpoint
API call. SpecifyingDatabaseName
when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.EventsPollInterval
— (Integer
)Specifies how often to check the binary log for new changes/events when the database is idle.
Example:
eventsPollInterval=5;
In the example, DMS checks for changes in the binary logs every five seconds.
TargetDbType
— (String
)Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example:
targetDbType=MULTIPLE_DATABASES
Possible values include:
"specific-database"
"multiple-databases"
MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example:
maxFileSize=512
ParallelLoadThreads
— (Integer
)Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example:
parallelLoadThreads=1
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
ServerTimezone
— (String
)Specifies the time zone for the source MySQL database.
Example:
serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MySQL endpoint connection details.
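A hypothetical MySQL target configuration using the settings above might look like this (not from the service reference; the ARN and values are placeholders):
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  MySQLSettings: {
    TargetDbType: 'multiple-databases', /* keep source schemas as separate databases on the target */
    ParallelLoadThreads: 4,             /* each thread opens its own connection */
    MaxFileSize: 512,                   /* KB per .csv transfer file */
    EventsPollInterval: 5,              /* check the binary log every five seconds when idle */
    ServerTimezone: 'US/Pacific'        /* no single quotes around the time zone */
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});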
OracleSettings
— (map
)The settings for the Oracle source and target endpoint. For more information, see the
OracleSettings
structure.AddSupplementalLogging
— (Boolean
)Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId
— (Integer
)Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the
AdditionalArchivedLogDestId
option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.AdditionalArchivedLogDestId
— (Integer
)Set this attribute with
ArchivedLogDestId
in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless necessary. For additional information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.ExtraArchivedLogDestIds
— (Array<Integer>
)Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the
dest_id
column in thev$archived_log
view. Use this setting with thearchivedLogDestId
extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup.This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, DMS needs information about what destination to get archive redo logs from to read changes. DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2]
In a primary-to-multiple-standby setup, you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]
Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless it's necessary. For more information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.AllowSelectNestedTables
— (Boolean
)Set this attribute to
true
to enable replication of Oracle tables containing columns that are nested tables or defined types.ParallelAsmReadThreads
— (Integer
)Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the
readAheadBlocks
attribute.ReadAheadBlocks
— (Integer
)Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly
— (Boolean
)Set this attribute to
false
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.UseAlternateFolderForOnline
— (Boolean
)Set this attribute to
true
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.OraclePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix
— (Boolean
)Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells DMS instance to replace the default Oracle root with the specified
usePathPrefix
setting to access the redo logs.EnableHomogenousTablespace
— (Boolean
)Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog
— (Boolean
)When set to
true
, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.ArchivedLogsOnly
— (Boolean
)When this field is set to
Y
, DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the DMS user account needs to be granted ASM privileges.AsmPassword
— (String
)For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the
asm_user_password
value. You set this value as part of the comma-separated value that you set to thePassword
request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmServer
— (String
)For an Oracle source endpoint, your ASM server address. You can set this value from the
asm_server
value. You setasm_server
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmUser
— (String
)For an Oracle source endpoint, your ASM user name. You can set this value from the
asm_user
value. You setasm_user
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.CharLengthSemantics
— (String
)Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to
CHAR
. Otherwise, the character column length is in bytes.Example:
charLengthSemantics=CHAR;
Possible values include:
"default"
"char"
"byte"
DatabaseName
— (String
)Database name for the endpoint.
DirectPathParallelLoad
— (Boolean
)When set to
true
, this attribute specifies a parallel load whenuseDirectPathFullLoad
is set toY
. This attribute also only applies when you use the DMS parallel load feature. Note that the target table cannot have any constraints or indexes.FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this attribute causes a task to fail if the actual size of an LOB column is greater than the specifiedLobMaxSize
.If a task is set to limited LOB mode and this option is set to
true
, the task fails instead of truncating the LOB data.NumberDatatypeScale
— (Integer
)Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example:
numberDataTypeScale=12
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ReadTableSpaceName
— (Boolean
)When set to
true
, this attribute supports tablespace replication.RetryInterval
— (Integer
)Specifies the number of seconds that the system waits before resending a query.
Example:
retryInterval=6;
SecurityDbEncryption
— (String
)For an Oracle source endpoint, the transparent data encryption (TDE) password required by DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the
TDE_Password
part of the comma-separated value you set to thePassword
request parameter when you create the endpoint. TheSecurityDbEncryption
setting is related to thisSecurityDbEncryptionName
setting. For more information, see Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.SecurityDbEncryptionName
— (String
)For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the
SecurityDbEncryption
setting. For more information on setting the key name value ofSecurityDbEncryptionName
, see the information and example for setting thesecurityDbEncryptionName
extra connection attribute in Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.ServerName
— (String
)Fully qualified domain name of the endpoint.
SpatialDataOptionToGeoJsonFunctionName
— (String
)Use this attribute to convert
SDO_GEOMETRY
toGEOJSON
format. By default, DMS calls theSDO2GEOJSON
custom function if present and accessible. Or you can create your own custom function that mimics the operation ofSDO2GEOJSON
and setSpatialDataOptionToGeoJsonFunctionName
to call it instead.StandbyDelayTime
— (Integer
)Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases.
In DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
Username
— (String
)Endpoint connection user name.
UseBFile
— (Boolean
)Set this attribute to Y to capture change data using the Binary Reader utility. To set this attribute to Y, you must also set
UseLogminerReader
to N. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for CDC.UseDirectPathFullLoad
— (Boolean
)Set this attribute to Y to have DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
UseLogminerReader
— (Boolean
)Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set
UseLogminerReader
to N, also setUseBfile
to Y. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in the DMS User Guide.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Oracle endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Oracle endpoint connection details.SecretsManagerOracleAsmAccessRoleArn
— (String
)Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the
SecretsManagerOracleAsmSecret
. ThisSecretsManagerOracleAsmSecret
has the secret value that allows access to the Oracle ASM of the endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerOracleAsmSecretId
. Or you can specify clear-text values forAsmUserName
,AsmPassword
, andAsmServerName
. You can't specify both. For more information on creating thisSecretsManagerOracleAsmSecret
and theSecretsManagerOracleAsmAccessRoleArn
andSecretsManagerOracleAsmSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerOracleAsmSecretId
— (String
)Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN, partial ARN, or friendly name of the
SecretsManagerOracleAsmSecret
that contains the Oracle ASM connection details for the Oracle endpoint.
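The Binary Reader and ASM options above interact, so a sketch may help (not from the service reference; the ARNs and secret IDs are placeholders, and true/false stand in for the Y/N values mentioned in the descriptions because these fields are Booleans in this SDK):
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  OracleSettings: {
    UseLogminerReader: false,  /* read the redo logs as a binary file instead of using LogMiner */
    UseBFile: true,            /* capture change data with the Binary Reader utility */
    NumberDatatypeScale: 12,
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-secrets-access', /* must allow iam:PassRole */
    SecretsManagerSecretId: 'dms/oracle/endpoint',
    SecretsManagerOracleAsmAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-asm-secrets-access',
    SecretsManagerOracleAsmSecretId: 'dms/oracle/asm' /* only needed when the endpoint uses ASM */
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});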
SybaseSettings
— (map
)The settings for the SAP ASE source and target endpoint. For more information, see the
SybaseSettings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SAP ASE endpoint connection details.
MicrosoftSQLServerSettings
— (map
)The settings for the Microsoft SQL Server source and target endpoint. For more information, see the
MicrosoftSQLServerSettings
structure.Port
— (Integer
)Endpoint TCP port.
BcpPacketSize
— (Integer
)The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName
— (String
)Database name for the endpoint.
ControlTablesFileGroup
— (String
)Specifies a file group for the DMS internal tables. When the replication task starts, all the internal DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
Password
— (String
)Endpoint connection password.
QuerySingleAlwaysOnNode
— (Boolean
)Adjusts the behavior of DMS when migrating from an SQL Server source database that is hosted as part of an Always On availability group cluster. If you need DMS to poll all the nodes in the Always On cluster for transaction backups, set this attribute to false.
ReadBackupOnly
— (Boolean
)When this attribute is set to
Y
, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter toY
enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.SafeguardPolicy
— (String
)Use this attribute to minimize the need to access the backup log and enable DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task: When this method is used, DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one DMS task can access the database at any given time. Therefore, if you need to run parallel DMS tasks against the same database, use the default method.
Possible values include:"rely-on-sql-server-replication-agent"
"exclusive-automatic-truncation"
"shared-automatic-truncation"
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
UseBcpFullLoad
— (Boolean
)Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the "use BCP for loading table" option.
UseThirdPartyBackupDevice
— (Boolean
)When this attribute is set to
Y
, DMS processes third-party transaction log backups if they are created in native format.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SQL Server endpoint connection details.
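For the SQL Server options above, a minimal sketch (not from the service reference; the ARN and values are placeholders):
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  MicrosoftSQLServerSettings: {
    DatabaseName: 'sales',
    Port: 1433,
    BcpPacketSize: 65536,                             /* bytes per BCP packet */
    UseBcpFullLoad: true,
    ReadBackupOnly: true,                             /* read changes only from transaction log backups */
    SafeguardPolicy: 'exclusive-automatic-truncation' /* only when Microsoft Replication isn't running */
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});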
IBMDb2Settings
— (map
)The settings for the IBM Db2 LUW source endpoint. For more information, see the
IBMDb2Settings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port. The default value is 50000.
ServerName
— (String
)Fully qualified domain name of the endpoint.
SetDataCaptureChanges
— (Boolean
)Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn
— (String
)For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead
— (Integer
)Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Db2 LUW endpoint connection details.
DocDbSettings
— (map
)Provides information that defines a DocumentDB endpoint.
Username
— (String
)The user name you use to access the DocumentDB source endpoint.
Password
— (String
)The password for the user account you use to access the DocumentDB source endpoint.
ServerName
— (String
)The name of the server on the DocumentDB source endpoint.
Port
— (Integer
)The port value for the DocumentDB source endpoint.
DatabaseName
— (String
)The database name on the DocumentDB source endpoint.
NestingLevel
— (String
)Specifies either document or table mode.
Default value is
Possible values include:"none"
. Specify"none"
to use document mode. Specify"one"
to use table mode."none"
"one"
ExtractDocId
— (Boolean
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (Integer
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the DocumentDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the DocumentDB endpoint connection details.
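A hypothetical DocumentDB source in table mode, using the settings above (not from the service reference; the server, credentials, and key ID are placeholders):
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  DocDbSettings: {
    ServerName: 'docdb-cluster.cluster-abc123.us-east-1.docdb.amazonaws.com',
    Port: 27017,
    DatabaseName: 'orders',
    Username: 'dms_user',
    Password: 'PASSWORD',
    NestingLevel: 'one',        /* table mode; "none" would use document mode */
    DocsToInvestigate: 1000,    /* documents previewed to infer the table structure */
    KmsKeyId: 'arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID'
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});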
RedisSettings
— (map
)The settings for the Redis target endpoint. For more information, see the
RedisSettings
structure.ServerName
— required — (String
)Fully qualified domain name of the endpoint.
Port
— required — (Integer
)Transmission Control Protocol (TCP) port for the endpoint.
SslSecurityProtocol
— (String
)The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include
plaintext
andssl-encryption
. The default isssl-encryption
. Thessl-encryption
option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using theSslCaCertificateArn
setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA.The
plaintext
option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database.
Possible values include:"plaintext"
"ssl-encryption"
AuthType
— (String
)The type of authentication to perform when connecting to a Redis target. Options include
none
,auth-token
, andauth-role
. Theauth-token
option requires anAuthPassword
value to be provided. Theauth-role
option requiresAuthUserName
andAuthPassword
values to be provided."none"
"auth-role"
"auth-token"
AuthUserName
— (String
)The user name provided with the
auth-role
option of theAuthType
setting for a Redis target endpoint.AuthPassword
— (String
)The password provided with the
auth-role
andauth-token
options of theAuthType
setting for a Redis target endpoint.SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
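Finally, a sketch of a Redis target using the settings above (not from the service reference; the server, token, and CA ARN are placeholders):
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN', /* placeholder */
  RedisSettings: {
    ServerName: 'redis.example.internal', /* required */
    Port: 6379,                           /* required */
    SslSecurityProtocol: 'ssl-encryption',
    SslCaCertificateArn: 'arn:aws:dms:us-east-1:123456789012:cert:EXAMPLECERT', /* omit to use the Amazon root CA */
    AuthType: 'auth-token',
    AuthPassword: 'REDIS-AUTH-TOKEN'      /* required for auth-token and auth-role */
  }
};
dms.modifyEndpoint(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});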
-
(AWS.Response)
—
Returns:
modifyEventSubscription(params = {}, callback) ⇒ AWS.Request
Modifies an existing DMS event notification subscription.
Service Reference:
Examples:
Calling the modifyEventSubscription operation
var params = {
  SubscriptionName: 'STRING_VALUE', /* required */
  Enabled: true || false,
  EventCategories: [
    'STRING_VALUE',
    /* more items */
  ],
  SnsTopicArn: 'STRING_VALUE',
  SourceType: 'STRING_VALUE'
};
dms.modifyEventSubscription(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
SubscriptionName
— (String
)The name of the DMS event notification subscription to be modified.
SnsTopicArn
— (String
)The Amazon Resource Name (ARN) of the Amazon SNS topic created for event notification. The ARN is created by Amazon SNS when you create a topic and subscribe to it.
SourceType
— (String
)The type of DMS resource that generates the events you want to subscribe to.
Valid values: replication-instance | replication-task
EventCategories
— (Array<String>
)A list of event categories for a source type that you want to subscribe to. Use the
DescribeEventCategories
action to see a list of event categories.Enabled
— (Boolean
)A Boolean value; set to true to activate the subscription.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:EventSubscription
— (map
)The modified event subscription.
CustomerAwsId
— (String
)The Amazon Web Services customer account associated with the DMS event notification subscription.
CustSubscriptionId
— (String
)The DMS event notification subscription Id.
SnsTopicArn
— (String
)The topic ARN of the DMS event notification subscription.
Status
— (String
)The status of the DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
SubscriptionCreationTime
— (String
)The time the DMS event notification subscription was created.
SourceType
— (String
)The type of DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
SourceIdsList
— (Array<String>
)A list of source Ids for the event subscription.
EventCategoriesList
— (Array<String>
)A list of event categories.
Enabled
— (Boolean
)Boolean value that indicates if the event subscription is enabled.
-
(AWS.Response)
—
Returns:
modifyReplicationInstance(params = {}, callback) ⇒ AWS.Request
Modifies the replication instance to apply new settings. You can change one or more parameters by specifying these parameters and the new values in the request.
Some settings are applied during the maintenance window.
Service Reference:
Examples:
Modify replication instance
/* Modifies the replication instance to apply new settings. You can change one or more parameters by specifying these parameters and the new values in the request. Some settings are applied during the maintenance window. */
var params = {
  AllocatedStorage: 123,
  AllowMajorVersionUpgrade: true,
  ApplyImmediately: true,
  AutoMinorVersionUpgrade: true,
  EngineVersion: "1.5.0",
  MultiAZ: true,
  PreferredMaintenanceWindow: "sun:06:00-sun:14:00",
  ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ",
  ReplicationInstanceClass: "dms.t2.micro",
  ReplicationInstanceIdentifier: "test-rep-1",
  VpcSecurityGroupIds: [
  ]
};
dms.modifyReplicationInstance(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /*
  data = {
    ReplicationInstance: {
      AllocatedStorage: 5,
      AutoMinorVersionUpgrade: true,
      EngineVersion: "1.5.0",
      KmsKeyId: "arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd",
      PendingModifiedValues: {
      },
      PreferredMaintenanceWindow: "sun:06:00-sun:14:00",
      PubliclyAccessible: true,
      ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ",
      ReplicationInstanceClass: "dms.t2.micro",
      ReplicationInstanceIdentifier: "test-rep-1",
      ReplicationInstanceStatus: "available",
      ReplicationSubnetGroup: {
        ReplicationSubnetGroupDescription: "default",
        ReplicationSubnetGroupIdentifier: "default",
        SubnetGroupStatus: "Complete",
        Subnets: [
          {
            SubnetAvailabilityZone: { Name: "us-east-1d" },
            SubnetIdentifier: "subnet-f6dd91af",
            SubnetStatus: "Active"
          },
          {
            SubnetAvailabilityZone: { Name: "us-east-1b" },
            SubnetIdentifier: "subnet-3605751d",
            SubnetStatus: "Active"
          },
          {
            SubnetAvailabilityZone: { Name: "us-east-1c" },
            SubnetIdentifier: "subnet-c2daefb5",
            SubnetStatus: "Active"
          },
          {
            SubnetAvailabilityZone: { Name: "us-east-1e" },
            SubnetIdentifier: "subnet-85e90cb8",
            SubnetStatus: "Active"
          }
        ],
        VpcId: "vpc-6741a603"
      }
    }
  }
  */
});
Calling the modifyReplicationInstance operation
var params = {
  ReplicationInstanceArn: 'STRING_VALUE', /* required */
  AllocatedStorage: 'NUMBER_VALUE',
  AllowMajorVersionUpgrade: true || false,
  ApplyImmediately: true || false,
  AutoMinorVersionUpgrade: true || false,
  EngineVersion: 'STRING_VALUE',
  MultiAZ: true || false,
  PreferredMaintenanceWindow: 'STRING_VALUE',
  ReplicationInstanceClass: 'STRING_VALUE',
  ReplicationInstanceIdentifier: 'STRING_VALUE',
  VpcSecurityGroupIds: [
    'STRING_VALUE',
    /* more items */
  ]
};
dms.modifyReplicationInstance(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) to be allocated for the replication instance.
ApplyImmediately
— (Boolean
)Indicates whether the changes should be applied immediately or during the next maintenance window.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. For example, to specify the instance class dms.c4.large, set this parameter to
"dms.c4.large"
.For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
VpcSecurityGroupIds
— (Array<String>
)Specifies the VPC security group to be used with the replication instance. The VPC security group must work with the VPC containing the replication instance.
PreferredMaintenanceWindow
— (String
)The weekly time range (in UTC) during which system maintenance can occur, which might result in an outage. Changing this parameter does not result in an outage, except in the following situation, and the change is asynchronously applied as soon as possible. If moving this window to the current time, there must be at least 30 minutes between the current time and end of the window to ensure pending changes are applied.
Default: Uses existing setting
Format: ddd:hh24:mi-ddd:hh24:mi
Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun
Constraints: Must be at least 30 minutes
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
When modifying a major engine version of an instance, also set
AllowMajorVersionUpgrade
totrue
.AllowMajorVersionUpgrade
— (Boolean
)Indicates that major version upgrades are allowed. Changing this parameter does not result in an outage, and the change is asynchronously applied as soon as possible.
This parameter must be set to
true
when specifying a value for theEngineVersion
parameter that is a different major version than the replication instance's current version.AutoMinorVersionUpgrade
— (Boolean
)A value that indicates that minor version upgrades are applied automatically to the replication instance during the maintenance window. Changing this parameter doesn't result in an outage, except in the case described following. The change is asynchronously applied as soon as possible.
An outage does result if these factors apply:
-
This parameter is set to
true
during the maintenance window. -
A newer minor version is available.
-
DMS has enabled automatic patching for the given engine version.
-
ReplicationInstanceIdentifier
— (String
)The replication instance identifier. This parameter is stored as a lowercase string.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationInstance
— (map
)The modified replication instance.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier is a required parameter. This parameter is stored as a lowercase string.
Constraints:
-
Must contain 1-63 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
Example:
myrepinstance
-
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. It is a required parameter, although a default value is pre-selected in the DMS console.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
ReplicationInstanceStatus
— (String
)The status of the replication instance. The possible return values include:
-
"available"
-
"creating"
-
"deleted"
-
"deleting"
-
"failed"
-
"modifying"
-
"upgrading"
-
"rebooting"
-
"resetting-master-credentials"
-
"storage-full"
-
"incompatible-credentials"
-
"incompatible-network"
-
"maintenance"
-
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime
— (Date
)The time the replication instance was created.
VpcSecurityGroups
— (Array<map>
)The VPC security group for the instance.
VpcSecurityGroupId
— (String
)The VPC security group ID.
Status
— (String
)The status of the VPC security group.
AvailabilityZone
— (String
)The Availability Zone for the instance.
ReplicationSubnetGroup
— (map
)The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
PreferredMaintenanceWindow
— (String
)The maintenance window times for the replication instance. Any pending upgrades to the replication instance are performed during this time.
PendingModifiedValues
— (map
)The pending modification values.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
When modifying a major engine version of an instance, also set
AllowMajorVersionUpgrade
totrue
.AutoMinorVersionUpgrade
— (Boolean
)Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress
— (String
)The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress
— (String
)The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses
— (Array<String>
)One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses
— (Array<String>
)One or more private IP addresses for the replication instance.
PubliclyAccessible
— (Boolean
)Specifies the accessibility options for the replication instance. A value of
true
represents an instance with a public IP address. A value offalse
represents an instance with a private IP address. The default value istrue
.SecondaryAvailabilityZone
— (String
)The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil
— (Date
)The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers
— (String
)The DNS name servers supported for the replication instance to access your on-premises source or target database.
-
(AWS.Response)
—
Returns:
modifyReplicationSubnetGroup(params = {}, callback) ⇒ AWS.Request
Modifies the settings for the specified replication subnet group.
Service Reference:
Examples:
Modify replication subnet group
/* Modifies the settings for the specified replication subnet group. */
var params = {
  ReplicationSubnetGroupDescription: "",
  ReplicationSubnetGroupIdentifier: "",
  SubnetIds: [
  ]
};
dms.modifyReplicationSubnetGroup(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
  /*
  data = {
    ReplicationSubnetGroup: {
    }
  }
  */
});
Calling the modifyReplicationSubnetGroup operation
var params = {
  ReplicationSubnetGroupIdentifier: 'STRING_VALUE', /* required */
  SubnetIds: [ /* required */
    'STRING_VALUE',
    /* more items */
  ],
  ReplicationSubnetGroupDescription: 'STRING_VALUE'
};
dms.modifyReplicationSubnetGroup(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationSubnetGroupIdentifier
— (String
)The name of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication instance subnet group.
SubnetIds
— (Array<String>
)A list of subnet IDs.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationSubnetGroup
— (map
)The modified replication subnet group.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
-
(AWS.Response)
—
Returns:
modifyReplicationTask(params = {}, callback) ⇒ AWS.Request
Modifies the specified replication task.
You can't modify the task endpoints. The task must be stopped before you can modify it.
For more information about DMS tasks, see Working with Migration Tasks in the Database Migration Service User Guide.
Service Reference:
Examples:
Calling the modifyReplicationTask operation
var params = {
  ReplicationTaskArn: 'STRING_VALUE', /* required */
  CdcStartPosition: 'STRING_VALUE',
  CdcStartTime: new Date || 'Wed Dec 31 1969 16:00:00 GMT-0800 (PST)' || 123456789,
  CdcStopPosition: 'STRING_VALUE',
  MigrationType: full-load | cdc | full-load-and-cdc,
  ReplicationTaskIdentifier: 'STRING_VALUE',
  ReplicationTaskSettings: 'STRING_VALUE',
  TableMappings: 'STRING_VALUE',
  TaskData: 'STRING_VALUE'
};
dms.modifyReplicationTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
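Because the task endpoints can't be changed and the task must be stopped before other settings can be modified, one common flow is to stop the task, wait for it to reach the stopped state, and then apply the change. The following is a minimal sketch of that flow using the promise interface and the replicationTaskStopped waiter; the task ARN and the new table-mapping JSON are placeholders, not values from this reference.

// Sketch: stop a running task, wait until it is stopped, then modify it.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({ apiVersion: '2016-01-01' });

async function updateTableMappings(taskArn, newMappingsJson) {
  // Stop the task first; the service rejects modifications to a running task.
  await dms.stopReplicationTask({ ReplicationTaskArn: taskArn }).promise();

  // Block until the task reports the "stopped" status.
  await dms.waitFor('replicationTaskStopped', {
    Filters: [{ Name: 'replication-task-arn', Values: [taskArn] }]
  }).promise();

  // Apply the new table mappings (a JSON string when calling the API directly).
  var data = await dms.modifyReplicationTask({
    ReplicationTaskArn: taskArn,
    TableMappings: newMappingsJson
  }).promise();
  return data.ReplicationTask;
}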
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskIdentifier
— (String
)The replication task identifier.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
MigrationType
— (String
)The migration type. Valid values:
Possible values include:full-load
|cdc
|full-load-and-cdc
"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)When using the CLI or boto3, provide the path of the JSON file that contains the table mappings. Precede the path with
file://
. For example,--table-mappings file://mappingfile.json
. When working with the DMS API, provide the JSON as the parameter value.ReplicationTaskSettings
— (String
)JSON file that contains settings for the task, such as task metadata settings.
CdcStartTime
— (Date
)Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time “2018-03-08T12:12:12”
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
Note: When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting theslotName
extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for DMS.CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12”
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTask
— (map
)The replication task that was modified.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include:"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
-
"moving"
– The task is being moved in response to running theMoveReplicationTask
operation. -
"creating"
– The task is being created in response to running theCreateReplicationTask
operation. -
"deleting"
– The task is being deleted in response to running theDeleteReplicationTask
operation. -
"failed"
– The task failed to successfully complete the database migration in response to running theStartReplicationTask
operation. -
"failed-move"
– The task failed to move in response to running theMoveReplicationTask
operation. -
"modifying"
– The task definition is being modified in response to running theModifyReplicationTask
operation. -
"ready"
– The task is in aready
state where it can respond to other task operations, such asStartReplicationTask
orDeleteReplicationTask
. -
"running"
– The task is performing a database migration in response to running theStartReplicationTask
operation. -
"starting"
– The task is preparing to perform a database migration in response to running theStartReplicationTask
operation. -
"stopped"
– The task has stopped in response to running theStopReplicationTask
operation. -
"stopping"
– The task is preparing to stop in response to running theStopReplicationTask
operation. -
"testing"
– The database migration specified for this task is being tested in response to running either theStartReplicationTaskAssessmentRun
or theStartReplicationTaskAssessment
operation.Note:StartReplicationTaskAssessmentRun
is an improved premigration task assessment operation. TheStartReplicationTaskAssessment
operation assesses data type compatibility only between the source and target database of a given migration task. In contrast,StartReplicationTaskAssessmentRun
enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
orCdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error.The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12”
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of theReplicationTask
object.
-
(AWS.Response)
—
Returns:
moveReplicationTask(params = {}, callback) ⇒ AWS.Request
Moves a replication task from its current replication instance to a different target replication instance using the specified parameters. The target replication instance must be created with the same or later DMS version as the current replication instance.
Service Reference:
Examples:
Calling the moveReplicationTask operation
var params = {
  ReplicationTaskArn: 'STRING_VALUE', /* required */
  TargetReplicationInstanceArn: 'STRING_VALUE' /* required */
};
dms.moveReplicationTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
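A move leaves the task in the "moving" status until it completes, so one possible pattern is to start the move and then poll DescribeReplicationTasks until the status changes. The sketch below assumes the promise interface; both ARNs are placeholders.

// Sketch: move a task to another replication instance and wait for the move to finish.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({ apiVersion: '2016-01-01' });

async function moveTask(taskArn, targetInstanceArn) {
  await dms.moveReplicationTask({
    ReplicationTaskArn: taskArn,
    TargetReplicationInstanceArn: targetInstanceArn
  }).promise();

  // Poll until the task leaves the "moving" status.
  var status = 'moving';
  while (status === 'moving') {
    await new Promise(function (resolve) { setTimeout(resolve, 30000); }); // 30s between polls
    var data = await dms.describeReplicationTasks({
      Filters: [{ Name: 'replication-task-arn', Values: [taskArn] }]
    }).promise();
    status = data.ReplicationTasks[0].Status;
  }
  return status; // "failed-move" indicates the move did not succeed
}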
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the task that you want to move.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which you want to move the task.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTask
— (map
)The replication task that was moved.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include:"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
-
"moving"
– The task is being moved in response to running theMoveReplicationTask
operation. -
"creating"
– The task is being created in response to running theCreateReplicationTask
operation. -
"deleting"
– The task is being deleted in response to running theDeleteReplicationTask
operation. -
"failed"
– The task failed to successfully complete the database migration in response to running theStartReplicationTask
operation. -
"failed-move"
– The task failed to move in response to running theMoveReplicationTask
operation. -
"modifying"
– The task definition is being modified in response to running theModifyReplicationTask
operation. -
"ready"
– The task is in aready
state where it can respond to other task operations, such asStartReplicationTask
orDeleteReplicationTask
. -
"running"
– The task is performing a database migration in response to running theStartReplicationTask
operation. -
"starting"
– The task is preparing to perform a database migration in response to running theStartReplicationTask
operation. -
"stopped"
– The task has stopped in response to running theStopReplicationTask
operation. -
"stopping"
– The task is preparing to stop in response to running theStopReplicationTask
operation. -
"testing"
– The database migration specified for this task is being tested in response to running either theStartReplicationTaskAssessmentRun
or theStartReplicationTaskAssessment
operation.Note:StartReplicationTaskAssessmentRun
is an improved premigration task assessment operation. TheStartReplicationTaskAssessment
operation assesses data type compatibility only between the source and target database of a given migration task. In contrast,StartReplicationTaskAssessmentRun
enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
orCdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error.The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12”
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of theReplicationTask
object.
-
(AWS.Response)
—
Returns:
rebootReplicationInstance(params = {}, callback) ⇒ AWS.Request
Reboots a replication instance. Rebooting results in a momentary outage, until the replication instance becomes available again.
Service Reference:
Examples:
Calling the rebootReplicationInstance operation
var params = {
  ReplicationInstanceArn: 'STRING_VALUE', /* required */
  ForceFailover: true || false,
  ForcePlannedFailover: true || false
};
dms.rebootReplicationInstance(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
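For a Multi-AZ instance, a planned-failover reboot can be combined with the replicationInstanceAvailable waiter to block until the instance is usable again. A minimal sketch, assuming the promise interface and a placeholder ARN:

// Sketch: reboot through a planned Multi-AZ failover, then wait for "available".
var AWS = require('aws-sdk');
var dms = new AWS.DMS({ apiVersion: '2016-01-01' });

async function plannedFailoverReboot(instanceArn) {
  // ForcePlannedFailover and ForceFailover can't both be true on one request.
  await dms.rebootReplicationInstance({
    ReplicationInstanceArn: instanceArn,
    ForcePlannedFailover: true
  }).promise();

  // Block until the instance reports "available" again.
  await dms.waitFor('replicationInstanceAvailable', {
    Filters: [{ Name: 'replication-instance-arn', Values: [instanceArn] }]
  }).promise();
}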
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
ForceFailover
— (Boolean
)If this parameter is
true
, the reboot is conducted through a Multi-AZ failover. If the instance isn't configured for Multi-AZ, then you can't specifytrue
. (--force-planned-failover
and--force-failover
can't both be set totrue
.)ForcePlannedFailover
— (Boolean
)If this parameter is
true
, the reboot is conducted through a planned Multi-AZ failover where resources are released and cleaned up prior to conducting the failover. If the instance isn't configured for Multi-AZ, then you can't specifytrue
. (--force-planned-failover
and--force-failover
can't both be set totrue
.)
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationInstance
— (map
)The replication instance that is being rebooted.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier is a required parameter. This parameter is stored as a lowercase string.
Constraints:
-
Must contain 1-63 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
Example:
myrepinstance
-
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. It is a required parameter, although a default value is pre-selected in the DMS console.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
ReplicationInstanceStatus
— (String
)The status of the replication instance. The possible return values include:
-
"available"
-
"creating"
-
"deleted"
-
"deleting"
-
"failed"
-
"modifying"
-
"upgrading"
-
"rebooting"
-
"resetting-master-credentials"
-
"storage-full"
-
"incompatible-credentials"
-
"incompatible-network"
-
"maintenance"
-
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime
— (Date
)The time the replication instance was created.
VpcSecurityGroups
— (Array<map>
)The VPC security group for the instance.
VpcSecurityGroupId
— (String
)The VPC security group ID.
Status
— (String
)The status of the VPC security group.
AvailabilityZone
— (String
)The Availability Zone for the instance.
ReplicationSubnetGroup
— (map
)The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
PreferredMaintenanceWindow
— (String
)The maintenance window times for the replication instance. Any pending upgrades to the replication instance are performed during this time.
PendingModifiedValues
— (map
)The pending modification values.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
When modifying a major engine version of an instance, also set
AllowMajorVersionUpgrade
totrue
.AutoMinorVersionUpgrade
— (Boolean
)Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress
— (String
)The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress
— (String
)The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses
— (Array<String>
)One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses
— (Array<String>
)One or more private IP addresses for the replication instance.
PubliclyAccessible
— (Boolean
)Specifies the accessibility options for the replication instance. A value of
true
represents an instance with a public IP address. A value offalse
represents an instance with a private IP address. The default value istrue
.SecondaryAvailabilityZone
— (String
)The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil
— (Date
)The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers
— (String
)The DNS name servers supported for the replication instance to access your on-premises source or target database.
-
(AWS.Response)
—
Returns:
refreshSchemas(params = {}, callback) ⇒ AWS.Request
Populates the schema for the specified endpoint. This is an asynchronous operation and can take several minutes. You can check the status of this operation by calling the DescribeRefreshSchemasStatus operation.
Service Reference:
Examples:
Refresh schema
/* Populates the schema for the specified endpoint. This is an asynchronous operation and can take several minutes. You can check the status of this operation by calling the describe-refresh-schemas-status operation. */
var params = {
  EndpointArn: "",
  ReplicationInstanceArn: ""
};
dms.refreshSchemas(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
  /*
  data = {
    RefreshSchemasStatus: {
    }
  }
  */
});
Calling the refreshSchemas operation
var params = {
  EndpointArn: 'STRING_VALUE', /* required */
  ReplicationInstanceArn: 'STRING_VALUE' /* required */
};
dms.refreshSchemas(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
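Because this operation is asynchronous, a caller typically polls DescribeRefreshSchemasStatus until the status is no longer "refreshing". A minimal sketch of that polling loop, assuming the promise interface and placeholder ARNs:

// Sketch: start a schema refresh and poll describeRefreshSchemasStatus until it finishes.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({ apiVersion: '2016-01-01' });

async function refreshAndWait(endpointArn, instanceArn) {
  await dms.refreshSchemas({
    EndpointArn: endpointArn,
    ReplicationInstanceArn: instanceArn
  }).promise();

  // The refresh can take several minutes; poll until it finishes.
  for (;;) {
    var data = await dms.describeRefreshSchemasStatus({ EndpointArn: endpointArn }).promise();
    var status = data.RefreshSchemasStatus.Status;
    if (status !== 'refreshing') {
      return status; // "successful" or "failed"
    }
    await new Promise(function (resolve) { setTimeout(resolve, 30000); }); // 30s between polls
  }
}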
Parameters:
-
params
(Object)
(defaults to: {})
—
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:RefreshSchemasStatus
— (map
)The status of the refreshed schema.
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
Status
— (String
)The status of the schema.
Possible values include:"successful"
"failed"
"refreshing"
LastRefreshDate
— (Date
)The date the schema was last refreshed.
LastFailureMessage
— (String
)The last failure message for the schema.
-
(AWS.Response)
—
Returns:
reloadTables(params = {}, callback) ⇒ AWS.Request
Reloads the target database table with the source data.
You can only use this operation with a task in the
RUNNING
state, otherwise the service will throw anInvalidResourceStateFault
exception.Service Reference:
Examples:
Calling the reloadTables operation
var params = {
  ReplicationTaskArn: 'STRING_VALUE', /* required */
  TablesToReload: [ /* required */
    {
      SchemaName: 'STRING_VALUE', /* required */
      TableName: 'STRING_VALUE' /* required */
    },
    /* more items */
  ],
  ReloadOption: data-reload | validate-only
};
dms.reloadTables(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
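When validation is enabled on the task, ReloadOption can be set to validate-only to re-validate tables without reloading their data. A small sketch with a placeholder ARN and placeholder table names; the task must be in the RUNNING state.

// Sketch: request re-validation of two tables on a running task.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({ apiVersion: '2016-01-01' });

dms.reloadTables({
  ReplicationTaskArn: 'arn:aws:dms:us-east-1:123456789012:task:EXAMPLE', // placeholder
  ReloadOption: 'validate-only',
  TablesToReload: [
    { SchemaName: 'sales', TableName: 'orders' },      // placeholder tables
    { SchemaName: 'sales', TableName: 'order_items' }
  ]
}, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log('Requested for task', data.ReplicationTaskArn);
});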
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
TablesToReload
— (Array<map>
)The name and schema of the table to be reloaded.
SchemaName
— required — (String
)The schema name of the table to be reloaded.
TableName
— required — (String
)The table name of the table to be reloaded.
ReloadOption
— (String
)Options for reload. Specify
data-reload
to reload the data and re-validate it if validation is enabled. Specifyvalidate-only
to re-validate the table. This option applies only when validation is enabled for the task.Valid values: data-reload, validate-only
Default value is data-reload.
Possible values include:"data-reload"
"validate-only"
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
-
(AWS.Response)
—
Returns:
removeTagsFromResource(params = {}, callback) ⇒ AWS.Request
Removes metadata tags from a DMS resource, including replication instance, endpoint, security group, and migration task. For more information, see
Tag
data type description.Service Reference:
Examples:
Remove tags from resource
/* Removes metadata tags from an AWS DMS resource. */
var params = {
  ResourceArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E",
  TagKeys: [
  ]
};
dms.removeTagsFromResource(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
  /*
  data = {
  }
  */
});
Calling the removeTagsFromResource operation
var params = {
  ResourceArn: 'STRING_VALUE', /* required */
  TagKeys: [ /* required */
    'STRING_VALUE',
    /* more items */
  ]
};
dms.removeTagsFromResource(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
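One possible way to clear every tag from a resource is to read the current tags with listTagsForResource (documented elsewhere in this class) and pass the collected keys to removeTagsFromResource. A sketch, assuming the promise interface and a placeholder ARN:

// Sketch: remove all tags currently attached to a DMS resource.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({ apiVersion: '2016-01-01' });

async function clearTags(resourceArn) {
  var tags = await dms.listTagsForResource({ ResourceArn: resourceArn }).promise();
  var keys = tags.TagList.map(function (t) { return t.Key; });
  if (keys.length === 0) return; // nothing to remove
  await dms.removeTagsFromResource({ ResourceArn: resourceArn, TagKeys: keys }).promise();
}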
Parameters:
-
params
(Object)
(defaults to: {})
—
ResourceArn
— (String
)A DMS resource from which you want to remove tags. The value for this parameter is an Amazon Resource Name (ARN).
TagKeys
— (Array<String>
)The tag key (name) of the tag to be removed.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
startReplicationTask(params = {}, callback) ⇒ AWS.Request
Starts the replication task.
For more information about DMS tasks, see Working with Migration Tasks in the Database Migration Service User Guide.
Service Reference:
Examples:
Start replication task
/* Starts the replication task. */
var params = {
  CdcStartTime: <Date Representation>,
  ReplicationTaskArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ",
  StartReplicationTaskType: "start-replication"
};
dms.startReplicationTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
  /*
  data = {
    ReplicationTask: {
      MigrationType: "full-load",
      ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ",
      ReplicationTaskArn: "arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM",
      ReplicationTaskCreationDate: <Date Representation>,
      ReplicationTaskIdentifier: "task1",
      ReplicationTaskSettings: "{\"TargetMetadata\":{\"TargetSchema\":\"\",\"SupportLobs\":true,\"FullLobMode\":true,\"LobChunkSize\":64,\"LimitedSizeLobMode\":false,\"LobMaxSize\":0},\"FullLoadSettings\":{\"FullLoadEnabled\":true,\"ApplyChangesEnabled\":false,\"TargetTablePrepMode\":\"DROP_AND_CREATE\",\"CreatePkAfterFullLoad\":false,\"StopTaskCachedChangesApplied\":false,\"StopTaskCachedChangesNotApplied\":false,\"ResumeEnabled\":false,\"ResumeMinTableSize\":100000,\"ResumeOnlyClusteredPKTables\":true,\"MaxFullLoadSubTasks\":8,\"TransactionConsistencyTimeout\":600,\"CommitRate\":10000},\"Logging\":{\"EnableLogging\":false}}",
      SourceEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE",
      Status: "creating",
      TableMappings: "file://mappingfile.json",
      TargetEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E"
    }
  }
  */
});
Calling the startReplicationTask operation
var params = {
  ReplicationTaskArn: 'STRING_VALUE', /* required */
  StartReplicationTaskType: start-replication | resume-processing | reload-target, /* required */
  CdcStartPosition: 'STRING_VALUE',
  CdcStartTime: new Date || 'Wed Dec 31 1969 16:00:00 GMT-0800 (PST)' || 123456789,
  CdcStopPosition: 'STRING_VALUE'
};
dms.startReplicationTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
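The RecoveryCheckpoint returned in the task description can be fed back into CdcStartPosition to restart CDC from the last recorded checkpoint. The sketch below shows one possible way to do this with the promise interface; the ARN is a placeholder, and whether start-replication or resume-processing is the appropriate start type depends on your task configuration.

// Sketch: restart CDC from the task's last recovery checkpoint, then wait for "running".
var AWS = require('aws-sdk');
var dms = new AWS.DMS({ apiVersion: '2016-01-01' });

async function restartFromCheckpoint(taskArn) {
  // Read the checkpoint recorded on the task during the previous CDC run.
  var described = await dms.describeReplicationTasks({
    Filters: [{ Name: 'replication-task-arn', Values: [taskArn] }]
  }).promise();
  var checkpoint = described.ReplicationTasks[0].RecoveryCheckpoint;

  // Start CDC again from that checkpoint.
  await dms.startReplicationTask({
    ReplicationTaskArn: taskArn,
    StartReplicationTaskType: 'start-replication',
    CdcStartPosition: checkpoint
  }).promise();

  // Block until the task reports the "running" status.
  await dms.waitFor('replicationTaskRunning', {
    Filters: [{ Name: 'replication-task-arn', Values: [taskArn] }]
  }).promise();
}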
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task to be started.
StartReplicationTaskType
— (String
)A type of replication task.
Possible values include:"start-replication"
"resume-processing"
"reload-target"
CdcStartTime
— (Date
)Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time “2018-03-08T12:12:12”
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
Note: When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting theslotName
extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for DMS.CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12”
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTask
— (map
)The replication task started.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include:"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
-
"moving"
– The task is being moved in response to running theMoveReplicationTask
operation. -
"creating"
– The task is being created in response to running theCreateReplicationTask
operation. -
"deleting"
– The task is being deleted in response to running theDeleteReplicationTask
operation. -
"failed"
– The task failed to successfully complete the database migration in response to running theStartReplicationTask
operation. -
"failed-move"
– The task failed to move in response to running theMoveReplicationTask
operation. -
"modifying"
– The task definition is being modified in response to running theModifyReplicationTask
operation. -
"ready"
– The task is in aready
state where it can respond to other task operations, such asStartReplicationTask
orDeleteReplicationTask
. -
"running"
– The task is performing a database migration in response to running theStartReplicationTask
operation. -
"starting"
– The task is preparing to perform a database migration in response to running theStartReplicationTask
operation. -
"stopped"
– The task has stopped in response to running theStopReplicationTask
operation. -
"stopping"
– The task is preparing to stop in response to running theStopReplicationTask
operation. -
"testing"
– The database migration specified for this task is being tested in response to running either theStartReplicationTaskAssessmentRun
or theStartReplicationTaskAssessment
operation.Note:StartReplicationTaskAssessmentRun
is an improved premigration task assessment operation. TheStartReplicationTaskAssessment
operation assesses data type compatibility only between the source and target database of a given migration task. In contrast,StartReplicationTaskAssessmentRun
enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
orCdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error.The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12”
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of theReplicationTask
object.
-
(AWS.Response)
—
Returns:
startReplicationTaskAssessment(params = {}, callback) ⇒ AWS.Request
Starts the replication task assessment for unsupported data types in the source database.
Service Reference:
Examples:
Calling the startReplicationTaskAssessment operation
var params = {
  ReplicationTaskArn: 'STRING_VALUE' /* required */
};
dms.startReplicationTaskAssessment(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
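The assessment runs asynchronously; its results can be retrieved later with describeReplicationTaskAssessmentResults (documented elsewhere in this class). A minimal sketch, assuming the promise interface and a placeholder ARN:

// Sketch: start the data type assessment, then fetch whatever results exist.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({ apiVersion: '2016-01-01' });

async function assessTask(taskArn) {
  await dms.startReplicationTaskAssessment({ ReplicationTaskArn: taskArn }).promise();

  // The assessment runs in the background; check back later for complete results.
  var results = await dms.describeReplicationTaskAssessmentResults({
    ReplicationTaskArn: taskArn
  }).promise();
  return results.ReplicationTaskAssessmentResults;
}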
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTask
— (map
)The assessed replication task.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include:"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
-
"moving"
– The task is being moved in response to running theMoveReplicationTask
operation. -
"creating"
– The task is being created in response to running theCreateReplicationTask
operation. -
"deleting"
– The task is being deleted in response to running theDeleteReplicationTask
operation. -
"failed"
– The task failed to successfully complete the database migration in response to running theStartReplicationTask
operation. -
"failed-move"
– The task failed to move in response to running theMoveReplicationTask
operation. -
"modifying"
– The task definition is being modified in response to running theModifyReplicationTask
operation. -
"ready"
– The task is in aready
state where it can respond to other task operations, such asStartReplicationTask
orDeleteReplicationTask
. -
"running"
– The task is performing a database migration in response to running theStartReplicationTask
operation. -
"starting"
– The task is preparing to perform a database migration in response to running theStartReplicationTask
operation. -
"stopped"
– The task has stopped in response to running theStopReplicationTask
operation. -
"stopping"
– The task is preparing to stop in response to running theStopReplicationTask
operation. -
"testing"
– The database migration specified for this task is being tested in response to running either theStartReplicationTaskAssessmentRun
or theStartReplicationTaskAssessment
operation.Note:StartReplicationTaskAssessmentRun
is an improved premigration task assessment operation. TheStartReplicationTaskAssessment
operation assesses data type compatibility only between the source and target database of a given migration task. In contrast,StartReplicationTaskAssessmentRun
enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
orCdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error.The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12”
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of theReplicationTask
object.
-
(AWS.Response)
—
Returns:
startReplicationTaskAssessmentRun(params = {}, callback) ⇒ AWS.Request
Starts a new premigration assessment run for one or more individual assessments of a migration task.
The assessments that you can specify depend on the source and target database engine and the migration type defined for the given task. To run this operation, your migration task must already be created. After you run this operation, you can review the status of each individual assessment. You can also run the migration task manually after the assessment run and its individual assessments complete.
Service Reference:
Examples:
Calling the startReplicationTaskAssessmentRun operation
var params = {
  AssessmentRunName: 'STRING_VALUE', /* required */
  ReplicationTaskArn: 'STRING_VALUE', /* required */
  ResultLocationBucket: 'STRING_VALUE', /* required */
  ServiceAccessRoleArn: 'STRING_VALUE', /* required */
  Exclude: [
    'STRING_VALUE',
    /* more items */
  ],
  IncludeOnly: [
    'STRING_VALUE',
    /* more items */
  ],
  ResultEncryptionMode: 'STRING_VALUE',
  ResultKmsKeyArn: 'STRING_VALUE',
  ResultLocationFolder: 'STRING_VALUE'
};
dms.startReplicationTaskAssessmentRun(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
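A sketch of a related workflow, not taken from the service reference: discover which individual assessments apply to a task, then start a run limited to some of them with IncludeOnly. The ARNs, bucket name, and role below are placeholders, and the IndividualAssessmentNames response field is assumed from the DescribeApplicableIndividualAssessments operation mentioned above; verify these for your account before use.
var taskArn = 'arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK'; // placeholder ARN
dms.describeApplicableIndividualAssessments({ ReplicationTaskArn: taskArn }, function(err, data) {
  if (err) return console.log(err, err.stack);
  // Assumed response field: the assessments DMS supports for this task
  var names = data.IndividualAssessmentNames || [];
  dms.startReplicationTaskAssessmentRun({
    AssessmentRunName: 'premigration-check',                                    // required
    ReplicationTaskArn: taskArn,                                                // required
    ResultLocationBucket: 'my-assessment-results-bucket',                       // required, placeholder
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-assessment-role', // required, must allow iam:PassRole
    IncludeOnly: names.slice(0, 1)                                              // run only the first applicable assessment
  }, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log(data.ReplicationTaskAssessmentRun.Status); // e.g. "starting"
  });
});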
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)Amazon Resource Name (ARN) of the migration task associated with the premigration assessment run that you want to start.
ServiceAccessRoleArn
— (String
)ARN of the service role needed to start the assessment run. The role must allow the
iam:PassRole
action.ResultLocationBucket
— (String
)Amazon S3 bucket where you want DMS to store the results of this assessment run.
ResultLocationFolder
— (String
)Folder within an Amazon S3 bucket where you want DMS to store the results of this assessment run.
ResultEncryptionMode
— (String
)Encryption mode that you can specify to encrypt the results of this assessment run. If you don't specify this request parameter, DMS stores the assessment run results without encryption. You can specify one of the options following:
-
"SSE_S3"
– The server-side encryption provided as a default by Amazon S3. -
"SSE_KMS"
– Key Management Service (KMS) encryption. This encryption can use either a custom KMS encryption key that you specify or the default KMS encryption key that DMS provides.
-
ResultKmsKeyArn
— (String
)ARN of a custom KMS encryption key that you specify when you set
ResultEncryptionMode
to"SSE_KMS
".AssessmentRunName
— (String
)Unique name to identify the assessment run.
IncludeOnly
— (Array<String>
)Space-separated list of names for specific individual assessments that you want to include. These names come from the default list of individual assessments that DMS supports for the associated migration task. This task is specified by
ReplicationTaskArn
.Note: You can't set a value forIncludeOnly
if you also set a value forExclude
in the API operation. To identify the names of the default individual assessments that DMS supports for the associated migration task, run theDescribeApplicableIndividualAssessments
operation using its ownReplicationTaskArn
request parameter.Exclude
— (Array<String>
)Space-separated list of names for specific individual assessments that you want to exclude. These names come from the default list of individual assessments that DMS supports for the associated migration task. This task is specified by
ReplicationTaskArn
.Note: You can't set a value forExclude
if you also set a value forIncludeOnly
in the API operation. To identify the names of the default individual assessments that DMS supports for the associated migration task, run theDescribeApplicableIndividualAssessments
operation using its ownReplicationTaskArn
request parameter.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTaskAssessmentRun
— (map
)The premigration assessment run that was started.
ReplicationTaskAssessmentRunArn
— (String
)Amazon Resource Name (ARN) of this assessment run.
ReplicationTaskArn
— (String
)ARN of the migration task associated with this premigration assessment run.
Status
— (String
)Assessment run status.
This status can have one of the following values:
-
"cancelling"
– The assessment run was canceled by theCancelReplicationTaskAssessmentRun
operation. -
"deleting"
– The assessment run was deleted by theDeleteReplicationTaskAssessmentRun
operation. -
"failed"
– At least one individual assessment completed with afailed
status. -
"error-provisioning"
– An internal error occurred while resources were provisioned (duringprovisioning
status). -
"error-executing"
– An internal error occurred while individual assessments ran (duringrunning
status). -
"invalid state"
– The assessment run is in an unknown state. -
"passed"
– All individual assessments have completed, and none has afailed
status. -
"provisioning"
– Resources required to run individual assessments are being provisioned. -
"running"
– Individual assessments are being run. -
"starting"
– The assessment run is starting, but resources are not yet being provisioned for individual assessments.
-
ReplicationTaskAssessmentRunCreationDate
— (Date
)Date on which the assessment run was created using the
StartReplicationTaskAssessmentRun
operation.AssessmentProgress
— (map
)Indication of the completion progress for the individual assessments specified to run.
IndividualAssessmentCount
— (Integer
)The number of individual assessments that are specified to run.
IndividualAssessmentCompletedCount
— (Integer
)The number of individual assessments that have completed, successfully or not.
LastFailureMessage
— (String
)Last message generated by an individual assessment failure.
ServiceAccessRoleArn
— (String
)ARN of the service role used to start the assessment run using the
StartReplicationTaskAssessmentRun
operation. The role must allow theiam:PassRole
action.ResultLocationBucket
— (String
)Amazon S3 bucket where DMS stores the results of this assessment run.
ResultLocationFolder
— (String
)Folder in an Amazon S3 bucket where DMS stores the results of this assessment run.
ResultEncryptionMode
— (String
)Encryption mode used to encrypt the assessment run results.
ResultKmsKeyArn
— (String
)ARN of the KMS encryption key used to encrypt the assessment run results.
AssessmentRunName
— (String
)Unique name of the assessment run.
-
(AWS.Response)
—
Returns:
stopReplicationTask(params = {}, callback) ⇒ AWS.Request
Stops the replication task.
Service Reference:
Examples:
Stop replication task
/* Stops the replication task. */

var params = {
  ReplicationTaskArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E"
};
dms.stopReplicationTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /*
  data = {
    ReplicationTask: {
      MigrationType: "full-load",
      ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ",
      ReplicationTaskArn: "arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM",
      ReplicationTaskCreationDate: <Date Representation>,
      ReplicationTaskIdentifier: "task1",
      ReplicationTaskSettings: "{\"TargetMetadata\":{\"TargetSchema\":\"\",\"SupportLobs\":true,\"FullLobMode\":true,\"LobChunkSize\":64,\"LimitedSizeLobMode\":false,\"LobMaxSize\":0},\"FullLoadSettings\":{\"FullLoadEnabled\":true,\"ApplyChangesEnabled\":false,\"TargetTablePrepMode\":\"DROP_AND_CREATE\",\"CreatePkAfterFullLoad\":false,\"StopTaskCachedChangesApplied\":false,\"StopTaskCachedChangesNotApplied\":false,\"ResumeEnabled\":false,\"ResumeMinTableSize\":100000,\"ResumeOnlyClusteredPKTables\":true,\"MaxFullLoadSubTasks\":8,\"TransactionConsistencyTimeout\":600,\"CommitRate\":10000},\"Logging\":{\"EnableLogging\":false}}",
      SourceEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE",
      Status: "creating",
      TableMappings: "file://mappingfile.json",
      TargetEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E"
    }
  }
  */
});
Calling the stopReplicationTask operation
var params = {
  ReplicationTaskArn: 'STRING_VALUE' /* required */
};
dms.stopReplicationTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
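A sketch combining this operation with the replicationTaskStopped waiter, not taken from the service reference: the task ARN is a placeholder, and the replication-task-arn filter name is assumed from the DescribeReplicationTasks operation that the waiter polls.
var taskArn = 'arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK'; // placeholder ARN
dms.stopReplicationTask({ ReplicationTaskArn: taskArn }, function(err, data) {
  if (err) return console.log(err, err.stack);
  console.log(data.ReplicationTask.Status); // typically "stopping"
  // Poll until the task reports the stopped state
  dms.waitFor('replicationTaskStopped', {
    Filters: [{ Name: 'replication-task-arn', Values: [taskArn] }] // assumed filter name
  }, function(err, data) {
    if (err) console.log(err, err.stack); // ResourceNotReady if the waiter times out
    else console.log('replication task stopped');
  });
});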
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task to be stopped.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ReplicationTask
— (map
)The replication task stopped.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include:"full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
-
"moving"
– The task is being moved in response to running theMoveReplicationTask
operation. -
"creating"
– The task is being created in response to running theCreateReplicationTask
operation. -
"deleting"
– The task is being deleted in response to running theDeleteReplicationTask
operation. -
"failed"
– The task failed to successfully complete the database migration in response to running theStartReplicationTask
operation. -
"failed-move"
– The task failed to move in response to running theMoveReplicationTask
operation. -
"modifying"
– The task definition is being modified in response to running theModifyReplicationTask
operation. -
"ready"
– The task is in aready
state where it can respond to other task operations, such asStartReplicationTask
orDeleteReplicationTask
. -
"running"
– The task is performing a database migration in response to running theStartReplicationTask
operation. -
"starting"
– The task is preparing to perform a database migration in response to running theStartReplicationTask
operation. -
"stopped"
– The task has stopped in response to running theStopReplicationTask
operation. -
"stopping"
– The task is preparing to stop in response to running theStopReplicationTask
operation. -
"testing"
– The database migration specified for this task is being tested in response to running either theStartReplicationTaskAssessmentRun
or theStartReplicationTaskAssessment
operation.Note:StartReplicationTaskAssessmentRun
is an improved premigration task assessment operation. TheStartReplicationTaskAssessment
operation assesses data type compatibility only between the source and target database of a given migration task. In contrast,StartReplicationTaskAssessmentRun
enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
orCdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error.The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time: 2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of theReplicationTask
object.
-
(AWS.Response)
—
Returns:
testConnection(params = {}, callback) ⇒ AWS.Request
Tests the connection between the replication instance and the endpoint.
Service Reference:
Examples:
Test connection
/* Tests the connection between the replication instance and the endpoint. */

var params = {
  EndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM",
  ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ"
};
dms.testConnection(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /*
  data = {
    Connection: {
    }
  }
  */
});
Calling the testConnection operation
var params = {
  EndpointArn: 'STRING_VALUE', /* required */
  ReplicationInstanceArn: 'STRING_VALUE' /* required */
};
dms.testConnection(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
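A sketch chaining this operation with the testConnectionSucceeds waiter, not taken from the service reference: both ARNs are placeholders, and the endpoint-arn filter name comes from the DMS.describeConnections() operation the waiter polls.
var params = {
  EndpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINT',       // placeholder
  ReplicationInstanceArn: 'arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEINSTANCE'  // placeholder
};
dms.testConnection(params, function(err, data) {
  if (err) return console.log(err, err.stack);
  dms.waitFor('testConnectionSucceeds', {
    Filters: [{ Name: 'endpoint-arn', Values: [params.EndpointArn] }]
  }, function(err, data) {
    if (err) console.log(err, err.stack);         // ResourceNotReady if the test never succeeds in time
    else console.log(data.Connections[0].Status); // "successful"
  });
});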
Parameters:
-
params
(Object)
(defaults to: {})
—
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Connection
— (map
)The connection tested.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
EndpointArn
— (String
)The ARN string that uniquely identifies the endpoint.
Status
— (String
)The connection status. This parameter can return one of the following values:
-
"successful"
-
"testing"
-
"failed"
-
"deleting"
-
LastFailureMessage
— (String
)The error message when the connection last failed.
EndpointIdentifier
— (String
)The identifier of the endpoint. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier. This parameter is stored as a lowercase string.
-
(AWS.Response)
—
Returns:
waitFor(state, params = {}, callback) ⇒ AWS.Request
Waits for a given DMS resource. The final callback or 'complete' event will be fired only when the resource is either in its final state or the waiter has timed out and stopped polling for the final state.
Examples:
Waiting for the testConnectionSucceeds state
var params = {
  // ... input parameters ...
};
dms.waitFor('testConnectionSucceeds', params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
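As a sketch only, the same polling can also be driven through the returned request's promise() form, assuming your runtime provides a Promise implementation to the SDK; replicationInstanceAvailable here is one of the waiter states listed for this service.
var params = { /* ... input parameters for the underlying describe call ... */ };
dms.waitFor('replicationInstanceAvailable', params).promise()
  .then(function(data) { console.log(data); })           // the resource reached the desired state
  .catch(function(err) { console.log(err, err.stack); }); // e.g. ResourceNotReady on timeout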
Parameters:
-
state
(String)
—
the resource state to wait for. Available states for this service are listed in "Waiter Resource States" below.
-
params
(map)
(defaults to: {})
—
a list of parameters for the given state. See each waiter resource state for required parameters.
Callback (callback):
-
function(err, data) { ... }
Callback containing error and data information. See the respective resource state for the expected error or data information.
If the waiter times out its requests, it will return a
ResourceNotReady
error.
Returns:
Waiter Resource States:
Waiter Resource Details
dms.waitFor('testConnectionSucceeds', params = {}, [callback]) ⇒ AWS.Request
Waits for the
testConnectionSucceeds
state by periodically calling the underlying DMS.describeConnections() operation every 5 seconds (at most 60 times).Examples:
Waiting for the testConnectionSucceeds state
var params = {
  // ... input parameters ...
};
dms.waitFor('testConnectionSucceeds', params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
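A sketch with concrete input (the ARNs are placeholders): the Filters parameter described below can narrow the underlying describeConnections polling to a single endpoint and replication instance.
var params = {
  Filters: [
    { Name: 'endpoint-arn', Values: ['arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINT'] },
    { Name: 'replication-instance-arn', Values: ['arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEINSTANCE'] }
  ]
};
dms.waitFor('testConnectionSucceeds', params, function(err, data) {
  if (err) console.log(err, err.stack);         // ResourceNotReady if the waiter times out
  else console.log(data.Connections[0].Status); // "successful" once the connection test passes
});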
Parameters:
-
params
(Object)
—
Filters
— (Array<map>
)The filters applied to the connection.
Valid filter names: endpoint-arn | replication-instance-arn
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.Connections
— (Array<map>
)A description of the connections.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
EndpointArn
— (String
)The ARN string that uniquely identifies the endpoint.
Status
— (String
)The connection status. This parameter can return one of the following values:
-
"successful"
-
"testing"
-
"failed"
-
"deleting"
-
LastFailureMessage
— (String
)The error message when the connection last failed.
EndpointIdentifier
— (String
)The identifier of the endpoint. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier. This parameter is stored as a lowercase string.
-
(AWS.Response)
—
Returns:
See Also:
dms.waitFor('endpointDeleted', params = {}, [callback]) ⇒ AWS.Request
Waits for the
endpointDeleted
state by periodically calling the underlying DMS.describeEndpoints() operation every 5 seconds (at most 60 times).Examples:
Waiting for the endpointDeleted state
var params = {
  // ... input parameters ...
};
dms.waitFor('endpointDeleted', params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
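As a sketch (the identifier below is a hypothetical value), the endpoint-id filter listed under Parameters can scope the underlying describeEndpoints polling to the endpoint you deleted.
var params = {
  Filters: [
    { Name: 'endpoint-id', Values: ['my-deleted-endpoint'] } // hypothetical endpoint identifier
  ]
};
dms.waitFor('endpointDeleted', params, function(err, data) {
  if (err) console.log(err, err.stack); // ResourceNotReady if the endpoint still exists after all retries
  else console.log(data);               // the endpoint is no longer reported by describeEndpoints
});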
Parameters:
-
params
(Object)
—
Filters
— (Array<map>
)Filters applied to the endpoints.
Valid filter names: endpoint-arn | endpoint-type | endpoint-id | engine-name
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.Endpoints
— (Array<map>
)Endpoint description.
EndpointIdentifier
— (String
)The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType
— (String
)The type of endpoint. Valid values are source and target.
Possible values include:
"source"
"target"
EngineName
— (String
)The database engine name. Valid values, depending on the EndpointType, include
"mysql"
,"oracle"
,"postgres"
,"mariadb"
,"aurora"
,"aurora-postgresql"
,"redshift"
,"s3"
,"db2"
,"azuredb"
,"sybase"
,"dynamodb"
,"mongodb"
,"kinesis"
,"kafka"
,"elasticsearch"
,"documentdb"
,"sqlserver"
, and"neptune"
.EngineDisplayName
— (String
)The expanded name for the engine name. For example, if the
EngineName
parameter is "aurora," this value would be "Amazon Aurora MySQL."Username
— (String
)The user name used to connect to the endpoint.
ServerName
— (String
)The name of the server at the endpoint.
Port
— (Integer
)The port value used to access the endpoint.
DatabaseName
— (String
)The name of the database at the endpoint.
ExtraConnectionAttributes
— (String
)Additional connection attributes used to connect to the endpoint.
Status
— (String
)The status of the endpoint.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
EndpointArn
— (String
)The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn
— (String
)The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode
— (String
)The SSL mode used to connect to the endpoint. The default value is none.
Possible values include:
"none"
"require"
"verify-ca"
"verify-full"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.ExternalTableDefinition
— (String
)The external table definition.
ExternalId
— (String
)Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings
— (map
)The settings for the DynamoDB target endpoint. For more information, see the
DynamoDBSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.
S3Settings
— (map
)The settings for the S3 target endpoint. For more information, see the
S3Settings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.ExternalTableDefinition
— (String
)Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter
— (String
)The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (
\n
).CsvDelimiter
— (String
)The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder
— (String
)An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path
bucketFolder/schema_name/table_name/
. If this parameter isn't specified, then the path used isschema_name/table_name/
.BucketName
— (String
)The name of the S3 bucket.
CompressionType
— (String
)An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
Possible values include:"none"
"gzip"
EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use
SSE_S3
, you need an Identity and Access Management (IAM) role with permission to allow"arn:aws:s3:::dms-*"
to use the following actions:-
s3:CreateBucket
-
s3:ListBucket
-
s3:DeleteBucket
-
s3:GetBucketLocation
-
s3:GetObject
-
s3:PutObject
-
s3:DeleteObject
-
s3:GetObjectVersion
-
s3:GetBucketPolicy
-
s3:PutBucketPolicy
-
s3:DeleteBucketPolicy
"sse-s3"
"sse-kms"
-
ServerSideEncryptionKmsKeyId
— (String
)If you are using
SSE_KMS
for theEncryptionMode
, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.Here is a CLI example:
aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat
— (String
)The format of the data that you want to use for output. You can choose one of the following:
-
csv
: This is a row-based file format with comma-separated values (.csv). -
parquet
: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
"csv"
"parquet"
-
EncodingType
— (String
)The type of encoding you are using:
-
RLE_DICTIONARY
uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default. -
PLAIN
doesn't use encoding at all. Values are stored as they are. -
PLAIN_DICTIONARY
builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
"plain"
"plain-dictionary"
"rle-dictionary"
-
DictPageSizeLimit
— (Integer
)The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of
PLAIN
. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts toPLAIN
encoding. This size is used for .parquet file format only.RowGroupLength
— (Integer
)The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum,
RowGroupLength
is set to the max row group length in bytes (64 * 1024 * 1024).DataPageSize
— (Integer
)The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion
— (String
)The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.
Possible values include:
"parquet-1-0"
"parquet-2-0"
EnableStatistics
— (Boolean
)A value that enables statistics for Parquet pages and row groups. Choose
true
to enable statistics,false
to disable. Statistics includeNULL
,DISTINCT
,MAX
, andMIN
values. This parameter defaults totrue
. This value is used for .parquet file format only.IncludeOpForFullLoad
— (Boolean
)A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note: DMS supports theIncludeOpForFullLoad
parameter in versions 3.1.4 and later.For full load, records can only be inserted. By default (the
false
setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. IfIncludeOpForFullLoad
is set totrue
ory
, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.Note: This setting works together with theCdcInsertsOnly
and theCdcInsertsAndUpdates
parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..CdcInsertsOnly
— (Boolean
)A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the
false
setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.If
CdcInsertsOnly
is set totrue
ory
, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value ofIncludeOpForFullLoad
. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to I to indicate the INSERT operation at the source. IfIncludeOpForFullLoad
is set tofalse
, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the interaction described preceding between theCdcInsertsOnly
andIncludeOpForFullLoad
parameters in versions 3.1.4 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.TimestampColumnName
— (String
)A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note: DMS supports theTimestampColumnName
parameter in versions 3.1.4 and later.DMS includes an additional
STRING
column in the .csv or .parquet object files of your migrated data when you setTimestampColumnName
to a nonblank value.For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is
yyyy-MM-dd HH:mm:ss.SSSSSS
. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.When the
AddColumnName
parameter is set totrue
, DMS also includes a name for the timestamp column that you set withTimestampColumnName
.ParquetTimestampInMillisecond
— (Boolean
)A value that specifies the precision of any
TIMESTAMP
column values that are written to an Amazon S3 object file in .parquet format.Note: DMS supports theParquetTimestampInMillisecond
parameter in versions 3.1.4 and later.When
ParquetTimestampInMillisecond
is set totrue
ory
, DMS writes allTIMESTAMP
columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.Currently, Amazon Athena and Glue can handle only millisecond precision for
TIMESTAMP
values. Set this parameter totrue
for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.Note: DMS writes anyTIMESTAMP
column values written to an S3 file in .csv format with microsecond precision. SettingParquetTimestampInMillisecond
has no effect on the string format of the timestamp column value that is inserted by setting theTimestampColumnName
parameter.CdcInsertsAndUpdates
— (Boolean
)A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is
false
, but whenCdcInsertsAndUpdates
is set totrue
ory
, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the
IncludeOpForFullLoad
parameter. IfIncludeOpForFullLoad
is set totrue
, the first field of every CDC record is set to eitherI
orU
to indicate INSERT and UPDATE operations at the source. But ifIncludeOpForFullLoad
is set tofalse
, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide..Note: DMS supports the use of theCdcInsertsAndUpdates
parameter in versions 3.3.1 and later.CdcInsertsOnly
andCdcInsertsAndUpdates
can't both be set totrue
for the same endpoint. Set eitherCdcInsertsOnly
orCdcInsertsAndUpdates
totrue
for the same endpoint, but not both.DatePartitionEnabled
— (Boolean
)When set to
true
, this parameter partitions S3 bucket folders based on transaction commit dates. The default value isfalse
. For more information about date-based folder partitioning, see Using date-based folder partitioning.DatePartitionSequence
— (String
)Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"YYYYMMDD"
"YYYYMMDDHH"
"YYYYMM"
"MMYYYYDD"
"DDMMYYYY"
DatePartitionDelimiter
— (String
)Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
Possible values include:
"SLASH"
"UNDERSCORE"
"DASH"
"NONE"
UseCsvNoSupValue
— (Boolean
)This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to
true
for columns not included in the supplemental log, DMS uses the value specified byCsvNoSupValue
. If not set or set tofalse
, DMS uses the null value for these columns.Note: This setting is supported in DMS versions 3.4.1 and later.CsvNoSupValue
— (String
)This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If
UseCsvNoSupValue
is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of theUseCsvNoSupValue
setting.Note: This setting is supported in DMS versions 3.4.1 and later.PreserveTransactions
— (Boolean
)If set to
true
, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified byCdcPath
. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.Note: This setting is supported in DMS versions 3.4.2 and later.CdcPath
— (String
)Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If
CdcPath
is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target if you setPreserveTransactions
totrue
, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified byBucketFolder
andBucketName
.For example, if you specify
CdcPath
asMyChangedData
, and you specifyBucketName
asMyTargetBucket
but do not specifyBucketFolder
, DMS creates the CDC folder path following:MyTargetBucket/MyChangedData
.If you specify the same
CdcPath
, and you specifyBucketName
asMyTargetBucket
andBucketFolder
asMyTargetData
, DMS creates the CDC folder path following:MyTargetBucket/MyTargetData/MyChangedData
.For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
Note: This setting is supported in DMS versions 3.4.2 and later.CannedAclForObjects
— (String
)A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
Possible values include:"none"
"private"
"public-read"
"public-read-write"
"authenticated-read"
"aws-exec-read"
"bucket-owner-read"
"bucket-owner-full-control"
AddColumnName
— (Boolean
)An optional parameter that, when set to
true
ory
, you can use to add column name information to the .csv output file.The default value is
false
. Valid values aretrue
,false
,y
, andn
.CdcMaxBatchInterval
— (Integer
)Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When
CdcMaxBatchInterval
andCdcMinFileSize
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 60 seconds.
CdcMinFileSize
— (Integer
)Minimum file size, defined in megabytes, to reach for a file output to Amazon S3.
When
CdcMinFileSize
andCdcMaxBatchInterval
are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 32 MB.
CsvNullValue
— (String
)An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of
NULL
.The default value is
NULL
. Valid values include any valid string.IgnoreHeaderRows
— (Integer
)When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
MaxFileSize
— (Integer
)A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
Rfc4180
— (Boolean
)For an S3 source, when this value is set to
true
ory
, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set tofalse
orn
, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to
true
ory
using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.The default value is
true
. Valid values includetrue
,false
,y
, andn
.
DmsTransferSettings
— (map
)The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
-
ServiceAccessRoleArn
- The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the iam:PassRole
action. -
BucketName
- The name of the S3 bucket to use.
Shorthand syntax for these settings is as follows:
ServiceAccessRoleArn=string,BucketName=string,
JSON syntax for these settings is as follows:
{ "ServiceAccessRoleArn": "string", "BucketName": "string"}
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the
iam:PassRole
action.BucketName
— (String
)The name of the S3 bucket to use.
-
MongoDbSettings
— (map
)The settings for the MongoDB source endpoint. For more information, see the
MongoDbSettings
structure.Username
— (String
)The user name you use to access the MongoDB source endpoint.
Password
— (String
)The password for the user account you use to access the MongoDB source endpoint.
ServerName
— (String
)The name of the server on the MongoDB source endpoint.
Port
— (Integer
)The port value for the MongoDB source endpoint.
DatabaseName
— (String
)The database name on the MongoDB source endpoint.
AuthType
— (String
)The authentication type you use to access the MongoDB source endpoint.
When set to "no", user name and password parameters are not used and can be empty.
Possible values include:
"no"
"password"
AuthMechanism
— (String
)The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
Possible values include:
"default"
"mongodb_cr"
"scram_sha_1"
NestingLevel
— (String
)Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
Possible values include:
"none"
"one"
ExtractDocId
— (String
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (String
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.AuthSource
— (String
)The MongoDB database name. This setting isn't used when
AuthType
is set to"no"
.The default is
"admin"
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MongoDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MongoDB endpoint connection details.
KinesisSettings
— (map
)The settings for the Amazon Kinesis target endpoint. For more information, see the
KinesisSettings
structure.StreamArn
— (String
)The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Kinesis data stream. The role must allow the
iam:PassRole
action.IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kinesis message output, unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is
false
.IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
KafkaSettings
— (map
)The settings for the Apache Kafka target endpoint. For more information, see the
KafkaSettings
structure.Broker
— (String
)A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form
broker-hostname-or-ip:port
. For example,"ec2-12-345-678-901.compute-1.amazonaws.com:2345"
. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.Topic
— (String
)The topic to which you migrate the data. If you don't specify a topic, DMS specifies
"kafka-default-topic"
as the migration topic.MessageFormat
— (String
)The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Possible values include:
"json"
"json-unformatted"
IncludeTransactionDetails
— (Boolean
)Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for
transaction_id
, previoustransaction_id
, andtransaction_record_id
(the record offset within a transaction). The default isfalse
.IncludePartitionValue
— (Boolean
)Shows the partition value within the Kafka message output unless the partition type is
schema-table-type
. The default isfalse
.PartitionIncludeSchemaTable
— (Boolean
)Prefixes schema and table names to partition values, when the partition type is
primary-key-type
. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default isfalse
.IncludeTableAlterOperations
— (Boolean
)Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table
,drop-table
,add-column
,drop-column
, andrename-column
. The default isfalse
.IncludeControlDetails
— (Boolean
)Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is
false
.MessageMaxBytes
— (Integer
)The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty
— (Boolean
)Include NULL and empty columns for records migrated to the endpoint. The default is
false
.SecurityProtocol
— (String
)Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
Possible values include:
"plaintext"
"ssl-authentication"
"ssl-encryption"
"sasl-ssl"
SslClientCertificateArn
— (String
)The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
SslClientKeyArn
— (String
)The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
SslClientKeyPassword
— (String
)The password for the client private key used to securely connect to a Kafka target endpoint.
SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
SaslUsername
— (String
)The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
SaslPassword
— (String
)The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
NoHexPrefix
— (Boolean
)Set this optional parameter to
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use theNoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
ElasticsearchSettings
— (map
)The settings for the Elasticsearch source endpoint. For more information, see the
ElasticsearchSettings
structure.ServiceAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the
iam:PassRole
action.EndpointUri
— required — (String
)The endpoint for the Elasticsearch cluster. DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage
— (Integer
)The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration
— (Integer
)The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings
— (map
)The settings for the Amazon Neptune target endpoint. For more information, see the
NeptuneSettings
structure.ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. The role must allow the
iam:PassRole
action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the Database Migration Service User Guide.S3BucketName
— required — (String
)The name of the Amazon S3 bucket where DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder
— required — (String
)A folder path where you want DMS to store migrated graph data in the S3 bucket specified by
S3BucketName
ErrorRetryDuration
— (Integer
)The number of milliseconds for DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize
— (Integer
)The maximum size in kilobytes of migrated graph data stored in a .csv file before DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount
— (Integer
)The number of times for DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled
— (Boolean
)If you want Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to
true
. Then attach the appropriate IAM policy document to your service role specified byServiceAccessRoleArn
. The default isfalse
.
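Because Neptune loading goes through an intermediate S3 staging area, a target endpoint needs both the staging bucket and a service role that DMS can pass. The sketch below is only an assumed illustration of how these settings might be supplied on createEndpoint; the role ARN, cluster address, bucket, and folder are hypothetical placeholders.
// Hypothetical example: Neptune target endpoint that stages migrated graph data in S3 before bulk-loading it.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.createEndpoint({
  EndpointIdentifier: 'my-neptune-target',                                   // placeholder
  EndpointType: 'target',
  EngineName: 'neptune',
  ServerName: 'my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com',   // placeholder
  Port: 8182,
  NeptuneSettings: {
    ServiceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-neptune-role', // role must allow iam:PassRole (placeholder ARN)
    S3BucketName: 'my-staging-bucket',                                       // placeholder
    S3BucketFolder: 'neptune-staging',                                       // placeholder
    ErrorRetryDuration: 250,                                                 // milliseconds between bulk-load retries (default)
    MaxRetryCount: 5,                                                        // default
    IamAuthEnabled: false
  }
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});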
RedshiftSettings
— (map
)Settings for the Amazon Redshift endpoint.
AcceptAnyDate
— (Boolean
)A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose
true
orfalse
(the default).This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript
— (String
)Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder
— (String
)An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift
COPY
command to upload the .csv files to the target table. The files are deleted once theCOPY
operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName
— (String
)The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames
— (Boolean
)If Amazon Redshift is configured to support case sensitive schema names, set
CaseSensitiveNames
totrue
. The default isfalse
.CompUpdate
— (Boolean
)If you set
CompUpdate
totrue
Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other thanRAW
. If you setCompUpdate
tofalse
, automatic compression is disabled and existing column encodings aren't changed. The default istrue
.ConnectionTimeout
— (Integer
)A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName
— (String
)The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat
— (String
)The date format that you are using. Valid values are
auto
(case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Usingauto
recognizes most strings, even some that aren't supported when you use a date format string.If your date and time values use formats different from each other, set this to
auto
.EmptyAsNull
— (Boolean
)A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of
true
sets empty CHAR and VARCHAR fields to null. The default isfalse
.EncryptionMode
— (String
)The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either
SSE_S3
(the default) orSSE_KMS
.Note: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
.To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
Possible values include:
"sse-s3"
"sse-kms"
ExplicitIds
— (Boolean
)This setting is only valid for a full-load migration task. Set
ExplicitIds
totrue
to have tables withIDENTITY
columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default isfalse
.FileTransferUploadStreams
— (Integer
)The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams
accepts a value from 1 through 64. It defaults to 10.LoadTimeout
— (Integer
)The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize
— (Integer
)The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576KB (1 GB).
Password
— (String
)The password for the user named in the
username
property.Port
— (Integer
)The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes
— (Boolean
)A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose
true
to remove quotation marks. The default isfalse
.ReplaceInvalidChars
— (String
)A list of characters that you want to replace. Use with
ReplaceChars
.ReplaceChars
— (String
)A value that specifies to replace the invalid characters specified in
ReplaceInvalidChars
, substituting the specified characters instead. The default is"?"
.ServerName
— (String
)The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the
iam:PassRole
action.ServerSideEncryptionKmsKeyId
— (String
)The KMS key ID. If you are using
SSE_KMS
for theEncryptionMode
, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.TimeFormat
— (String
)The time format that you want to use. Valid values are
auto
(case-sensitive),'timeformat_string'
,'epochsecs'
, or'epochmillisecs'
. It defaults to 10. Usingauto
recognizes most strings, even some that aren't supported when you use a time format string.If your date and time values use formats different from each other, set this parameter to
auto
.TrimBlanks
— (Boolean
)A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose
true
to remove unneeded white space. The default isfalse
.TruncateColumns
— (Boolean
)A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose
true
to truncate data. The default isfalse
.Username
— (String
)An Amazon Redshift user name for a registered user.
WriteBufferSize
— (Integer
)The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Amazon Redshift endpoint connection details.
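The note above describes two mutually exclusive ways to supply Redshift credentials: clear-text UserName, Password, ServerName, and Port values, or a Secrets Manager secret plus an access role. The following sketch is only an assumed illustration of the Secrets Manager variant combined with SSE_KMS encryption of the staged .csv files; every identifier, ARN, and name is a hypothetical placeholder.
// Hypothetical example: Redshift target endpoint that reads its connection details from Secrets Manager
// and encrypts staged .csv files with a KMS key.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.createEndpoint({
  EndpointIdentifier: 'my-redshift-target',                                       // placeholder
  EndpointType: 'target',
  EngineName: 'redshift',
  RedshiftSettings: {
    DatabaseName: 'analytics',                                                    // placeholder
    BucketName: 'my-dms-staging-bucket',                                          // intermediate S3 bucket (placeholder)
    EncryptionMode: 'sse-kms',
    ServerSideEncryptionKmsKeyId: 'arn:aws:kms:us-east-1:123456789012:key/EXAMPLE', // placeholder key ARN
    // Secrets Manager variant: do not also pass clear-text credentials.
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-secrets-role', // placeholder ARN
    SecretsManagerSecretId: 'my-redshift-secret'                                  // placeholder
  }
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});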
PostgreSQLSettings
— (map
)The settings for the PostgreSQL source and target endpoint. For more information, see the
PostgreSQLSettings
structure.AfterConnectScript
— (String
)For use with change data capture (CDC) only, this attribute has DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example:
afterConnectScript=SET session_replication_role='replica'
CaptureDdls
— (Boolean
)To capture DDL events, DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to
N
, you don't have to create tables or triggers on the source database.MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example:
maxFileSize=512
DatabaseName
— (String
)Database name for the endpoint.
DdlArtifactsSchema
— (String
)The schema in which the operational DDL database artifacts are created.
Example:
ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout
— (Integer
)Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example:
executeTimeout=100;
FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this value causes a task to fail if the actual size of a LOB column is greater than the specifiedLobMaxSize
.If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
HeartbeatEnable
— (Boolean
)The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps
restart_lsn
moving and prevents storage full scenarios.HeartbeatSchema
— (String
)Sets the schema in which the heartbeat artifacts are created.
HeartbeatFrequency
— (Integer
)Sets the WAL heartbeat frequency (in minutes).
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SlotName
— (String
)Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance.
When used with the
CdcStartPosition
request parameter for the DMS API , this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting ofCdcStartPosition
. If the specified slot doesn't exist or the task doesn't have a validCdcStartPosition
setting, DMS raises an error.For more information about setting the
CdcStartPosition
request parameter, see Determining a CDC native start point in the Database Migration Service User Guide. For more information about usingCdcStartPosition
, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.PluginName
— (String
)Specifies the plugin to use to create a replication slot.
Possible values include:"no-preference"
"test-decoding"
"pglogical"
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the PostgreSQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the PostgreSQL endpoint connection details.
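As the SlotName description notes, a pre-created logical replication slot can be paired with a native CDC start point, and the heartbeat settings keep restart_lsn advancing on an otherwise idle source. The sketch below is only an assumed illustration of a source endpoint that names such a slot; the host, database, credentials, and schema names are hypothetical placeholders.
// Hypothetical example: PostgreSQL source endpoint that uses a pre-created logical replication slot
// and enables the WAL heartbeat.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.createEndpoint({
  EndpointIdentifier: 'my-postgres-source',   // placeholder
  EndpointType: 'source',
  EngineName: 'postgres',
  PostgreSQLSettings: {
    ServerName: 'pg.example.com',             // placeholder
    Port: 5432,
    DatabaseName: 'appdb',                    // placeholder
    Username: 'dms_user',                     // placeholder
    Password: 'dms_password',                 // placeholder
    SlotName: 'dms_slot',                     // previously created logical replication slot (placeholder)
    PluginName: 'pglogical',
    HeartbeatEnable: true,                    // keep restart_lsn moving
    HeartbeatSchema: 'dms_heartbeat',         // placeholder schema for heartbeat artifacts
    HeartbeatFrequency: 5                     // minutes
  }
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});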
MySQLSettings
— (map
)The settings for the MySQL source and target endpoint. For more information, see the
MySQLSettings
structure.AfterConnectScript
— (String
)Specifies a script to run immediately after DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails.
For this parameter, provide the code of the script itself, not the name of a file containing the script.
CleanSourceMetadataOnMismatch
— (Boolean
)Adjusts the behavior of DMS when migrating from an SQL Server source database that is hosted as part of an Always On availability group cluster. If you need DMS to poll all the nodes in the Always On cluster for transaction backups, set this attribute to
false
.DatabaseName
— (String
)Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the
DatabaseName
request parameter on either theCreateEndpoint
orModifyEndpoint
API call. SpecifyingDatabaseName
when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.EventsPollInterval
— (Integer
)Specifies how often to check the binary log for new changes/events when the database is idle.
Example:
eventsPollInterval=5;
In the example, DMS checks for changes in the binary logs every five seconds.
TargetDbType
— (String
)Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example:
targetDbType=MULTIPLE_DATABASES
Possible values include:
"specific-database"
"multiple-databases"
MaxFileSize
— (Integer
)Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example:
maxFileSize=512
ParallelLoadThreads
— (Integer
)Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example:
parallelLoadThreads=1
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
ServerTimezone
— (String
)Specifies the time zone for the source MySQL database.
Example:
serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the MySQL endpoint connection details.
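Following the DatabaseName guidance above, a MySQL endpoint normally omits DatabaseName and instead controls placement with TargetDbType. The sketch below is only an assumed illustration of one way these settings might be supplied; the connection values are hypothetical placeholders.
// Hypothetical example: MySQL-compatible target endpoint. DatabaseName is deliberately omitted;
// TargetDbType controls how source tables are placed on the target.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.createEndpoint({
  EndpointIdentifier: 'my-mysql-target',      // placeholder
  EndpointType: 'target',
  EngineName: 'mysql',
  MySQLSettings: {
    ServerName: 'mysql.example.com',          // placeholder
    Port: 3306,
    Username: 'dms_user',                     // placeholder
    Password: 'dms_password',                 // placeholder
    TargetDbType: 'multiple-databases',       // keep the source database layout
    EventsPollInterval: 5,                    // check the binary log every 5 seconds when idle
    ParallelLoadThreads: 2,                   // each thread opens its own connection
    MaxFileSize: 512                          // KB per intermediate .csv file
  }
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});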
OracleSettings
— (map
)The settings for the Oracle source and target endpoint. For more information, see the
OracleSettings
structure.AddSupplementalLogging
— (Boolean
)Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId
— (Integer
)Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the
AdditionalArchivedLogDestId
option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.AdditionalArchivedLogDestId
— (Integer
)Set this attribute with
ArchivedLogDestId
in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless necessary. For additional information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.ExtraArchivedLogDestIds
— (Array<Integer>
)Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the
dest_id
column in thev$archived_log
view. Use this setting with thearchivedLogDestId
extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup.This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, DMS needs information about what destination to get archive redo logs from to read changes. DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2]
In a primary-to-multiple-standby setup, you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]
Although DMS supports the use of the Oracle
RESETLOGS
option to open the database, never useRESETLOGS
unless it's necessary. For more information aboutRESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.AllowSelectNestedTables
— (Boolean
)Set this attribute to
true
to enable replication of Oracle tables containing columns that are nested tables or defined types.ParallelAsmReadThreads
— (Integer
)Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the
readAheadBlocks
attribute.ReadAheadBlocks
— (Integer
)Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly
— (Boolean
)Set this attribute to
false
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.UseAlternateFolderForOnline
— (Boolean
)Set this attribute to
true
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.OraclePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix
— (String
)Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix
— (Boolean
)Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified
usePathPrefix
setting to access the redo logs.EnableHomogenousTablespace
— (Boolean
)Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog
— (Boolean
)When set to
true
, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.ArchivedLogsOnly
— (Boolean
)When this field is set to
Y
, DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the DMS user account needs to be granted ASM privileges.AsmPassword
— (String
)For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the
asm_user_password
value. You set this value as part of the comma-separated value that you set to thePassword
request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmServer
— (String
)For an Oracle source endpoint, your ASM server address. You can set this value from the
asm_server
value. You setasm_server
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.AsmUser
— (String
)For an Oracle source endpoint, your ASM user name. You can set this value from the
asm_user
value. You setasm_user
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.CharLengthSemantics
— (String
)Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to
CHAR
. Otherwise, the character column length is in bytes.
Example: charLengthSemantics=CHAR;
Possible values include:
"default"
"char"
"byte"
DatabaseName
— (String
)Database name for the endpoint.
DirectPathParallelLoad
— (Boolean
)When set to
true
, this attribute specifies a parallel load whenuseDirectPathFullLoad
is set toY
. This attribute also only applies when you use the DMS parallel load feature. Note that the target table cannot have any constraints or indexes.FailTasksOnLobTruncation
— (Boolean
)When set to
true
, this attribute causes a task to fail if the actual size of an LOB column is greater than the specifiedLobMaxSize
.If a task is set to limited LOB mode and this option is set to
true
, the task fails instead of truncating the LOB data.NumberDatatypeScale
— (Integer
)Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example:
numberDataTypeScale=12
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ReadTableSpaceName
— (Boolean
)When set to
true
, this attribute supports tablespace replication.RetryInterval
— (Integer
)Specifies the number of seconds that the system waits before resending a query.
Example:
retryInterval=6;
SecurityDbEncryption
— (String
)For an Oracle source endpoint, the transparent data encryption (TDE) password required by DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the
TDE_Password
part of the comma-separated value you set to thePassword
request parameter when you create the endpoint. TheSecurityDbEncryption
setting is related to thisSecurityDbEncryptionName
setting. For more information, see Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.SecurityDbEncryptionName
— (String
)For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the
SecurityDbEncryption
setting. For more information on setting the key name value ofSecurityDbEncryptionName
, see the information and example for setting thesecurityDbEncryptionName
extra connection attribute in Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.ServerName
— (String
)Fully qualified domain name of the endpoint.
SpatialDataOptionToGeoJsonFunctionName
— (String
)Use this attribute to convert
SDO_GEOMETRY
toGEOJSON
format. By default, DMS calls theSDO2GEOJSON
custom function if present and accessible. Or you can create your own custom function that mimics the operation ofSDO2GEOJSON
and setSpatialDataOptionToGeoJsonFunctionName
to call it instead.StandbyDelayTime
— (Integer
)Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases.
In DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
Username
— (String
)Endpoint connection user name.
UseBFile
— (Boolean
)Set this attribute to Y to capture change data using the Binary Reader utility. Set
UseLogminerReader
to N to set this attribute to Y. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for CDC.UseDirectPathFullLoad
— (Boolean
)Set this attribute to Y to have DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
UseLogminerReader
— (Boolean
)Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set
UseLogminerReader
to N, also setUseBfile
to Y. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in the DMS User Guide.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Oracle endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Oracle endpoint connection details.SecretsManagerOracleAsmAccessRoleArn
— (String
)Required only if your Oracle endpoint uses Automatic Storage Management (ASM). The full ARN of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the
SecretsManagerOracleAsmSecret
. ThisSecretsManagerOracleAsmSecret
has the secret value that allows access to the Oracle ASM of the endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerOracleAsmSecretId
. Or you can specify clear-text values forAsmUserName
,AsmPassword
, andAsmServerName
. You can't specify both. For more information on creating thisSecretsManagerOracleAsmSecret
and theSecretsManagerOracleAsmAccessRoleArn
andSecretsManagerOracleAsmSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerOracleAsmSecretId
— (String
)Required only if your Oracle endpoint uses Automatic Storage Management (ASM). The full ARN, partial ARN, or friendly name of the
SecretsManagerOracleAsmSecret
that contains the Oracle ASM connection details for the Oracle endpoint.
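Several of the Oracle attributes above work together: capturing changes with Binary Reader means setting UseLogminerReader to false and UseBFile to true, and an ASM-backed source additionally needs the ASM secret and its access role. The sketch below is only an assumed illustration of that combination; every ARN, name, and identifier is a hypothetical placeholder.
// Hypothetical example: Oracle source endpoint that uses Binary Reader (UseLogminerReader=false,
// UseBFile=true) with both the endpoint credentials and the ASM credentials coming from Secrets Manager.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.createEndpoint({
  EndpointIdentifier: 'my-oracle-source',     // placeholder
  EndpointType: 'source',
  EngineName: 'oracle',
  OracleSettings: {
    DatabaseName: 'ORCL',                     // placeholder
    UseLogminerReader: false,                 // read redo logs as a binary file...
    UseBFile: true,                           // ...using the Binary Reader utility
    AddSupplementalLogging: true,             // PRIMARY KEY supplemental logging on selected tables
    SecretsManagerAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-secrets-role',  // placeholder ARN
    SecretsManagerSecretId: 'my-oracle-secret',                                      // placeholder
    SecretsManagerOracleAsmAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-asm-role', // placeholder ARN
    SecretsManagerOracleAsmSecretId: 'my-oracle-asm-secret'                          // placeholder
  }
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});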
SybaseSettings
— (map
)The settings for the SAP ASE source and target endpoint. For more information, see the
SybaseSettings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port.
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SAP ASE endpoint connection details.
MicrosoftSQLServerSettings
— (map
)The settings for the Microsoft SQL Server source and target endpoint. For more information, see the
MicrosoftSQLServerSettings
structure.Port
— (Integer
)Endpoint TCP port.
BcpPacketSize
— (Integer
)The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName
— (String
)Database name for the endpoint.
ControlTablesFileGroup
— (String
)Specifies a file group for the DMS internal tables. When the replication task starts, all the internal DMS control tables (awsdms_ apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
Password
— (String
)Endpoint connection password.
QuerySingleAlwaysOnNode
— (Boolean
)Cleans and recreates table metadata information on the replication instance when a mismatch occurs. An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
ReadBackupOnly
— (Boolean
)When this attribute is set to
Y
, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter toY
enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.SafeguardPolicy
— (String
)Use this attribute to minimize the need to access the backup log and enable DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task: When this method is used, DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one DMS task can access the database at any given time. Therefore, if you need to run parallel DMS tasks against the same database, use the default method.
Possible values include:"rely-on-sql-server-replication-agent"
"exclusive-automatic-truncation"
"shared-automatic-truncation"
ServerName
— (String
)Fully qualified domain name of the endpoint.
Username
— (String
)Endpoint connection user name.
UseBcpFullLoad
— (Boolean
)Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
UseThirdPartyBackupDevice
— (Boolean
)When this attribute is set to
Y
, DMS processes third-party transaction log backups if they are created in native format.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the SQL Server endpoint connection details.
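The SafeguardPolicy and ReadBackupOnly descriptions above suggest two common configurations for ongoing replication from SQL Server. The sketch below is only an assumed illustration of the backup-only variant with exclusive sp_repldone truncation handling; the connection values are hypothetical placeholders.
// Hypothetical example: SQL Server source endpoint that reads changes only from transaction log
// backups and lets a single task mark TLOG transactions for truncation.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.createEndpoint({
  EndpointIdentifier: 'my-sqlserver-source',          // placeholder
  EndpointType: 'source',
  EngineName: 'sqlserver',
  MicrosoftSQLServerSettings: {
    ServerName: 'mssql.example.com',                  // placeholder
    Port: 1433,
    DatabaseName: 'sales',                            // placeholder
    Username: 'dms_user',                             // placeholder
    Password: 'dms_password',                         // placeholder
    ReadBackupOnly: true,                             // do not read the active transaction log
    SafeguardPolicy: 'exclusive-automatic-truncation' // only one DMS task may access this database
  }
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});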
IBMDb2Settings
— (map
)The settings for the IBM Db2 LUW source endpoint. For more information, see the
IBMDb2Settings
structure.DatabaseName
— (String
)Database name for the endpoint.
Password
— (String
)Endpoint connection password.
Port
— (Integer
)Endpoint TCP port. The default value is 50000.
ServerName
— (String
)Fully qualified domain name of the endpoint.
SetDataCaptureChanges
— (Boolean
)Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn
— (String
)For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead
— (Integer
)Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username
— (String
)Endpoint connection user name.
SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the Db2 LUW endpoint connection details.
DocDbSettings
— (map
)Provides information that defines a DocumentDB endpoint.
Username
— (String
)The user name you use to access the DocumentDB source endpoint.
Password
— (String
)The password for the user account you use to access the DocumentDB source endpoint.
ServerName
— (String
)The name of the server on the DocumentDB source endpoint.
Port
— (Integer
)The port value for the DocumentDB source endpoint.
DatabaseName
— (String
)The database name on the DocumentDB source endpoint.
NestingLevel
— (String
)Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode (see the sketch at the end of these DocumentDB settings).
Possible values include:
"none"
"one"
ExtractDocId
— (Boolean
)Specifies the document ID. Use this setting when
NestingLevel
is set to"none"
.Default value is
"false"
.DocsToInvestigate
— (Integer
)Indicates the number of documents to preview to determine the document organization. Use this setting when
NestingLevel
is set to"one"
.Must be a positive value greater than
0
. Default value is1000
.KmsKeyId
— (String
)The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.SecretsManagerAccessRoleArn
— (String
)The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in
SecretsManagerSecret
. The role must allow theiam:PassRole
action.SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the DocumentDB endpoint.Note: You can specify one of two sets of values for these permissions. You can specify the values for this setting andSecretsManagerSecretId
. Or you can specify clear-text values forUserName
,Password
,ServerName
, andPort
. You can't specify both. For more information on creating thisSecretsManagerSecret
and theSecretsManagerAccessRoleArn
andSecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.SecretsManagerSecretId
— (String
)The full ARN, partial ARN, or friendly name of the
SecretsManagerSecret
that contains the DocumentDB endpoint connection details.
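NestingLevel, ExtractDocId, and DocsToInvestigate interact: table mode ("one") relies on DocsToInvestigate to infer the table layout, while ExtractDocId applies only to document mode ("none"). The sketch below is only an assumed illustration of configuring table mode; the cluster address, database, and credentials are hypothetical placeholders.
// Hypothetical example: DocumentDB source endpoint in table mode, previewing 2000 documents
// to determine the document organization.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.createEndpoint({
  EndpointIdentifier: 'my-docdb-source',      // placeholder
  EndpointType: 'source',
  EngineName: 'docdb',
  DocDbSettings: {
    ServerName: 'docdb.cluster-abc123.us-east-1.docdb.amazonaws.com', // placeholder
    Port: 27017,
    DatabaseName: 'orders',                   // placeholder
    Username: 'dms_user',                     // placeholder
    Password: 'dms_password',                 // placeholder
    NestingLevel: 'one',                      // table mode
    DocsToInvestigate: 2000                   // documents previewed to determine organization
  }
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});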
RedisSettings
— (map
)The settings for the Redis target endpoint. For more information, see the
RedisSettings
structure.ServerName
— required — (String
)Fully qualified domain name of the endpoint.
Port
— required — (Integer
)Transmission Control Protocol (TCP) port for the endpoint.
SslSecurityProtocol
— (String
)The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include
plaintext
andssl-encryption
. The default isssl-encryption
. Thessl-encryption
option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using theSslCaCertificateArn
setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA.
The plaintext option doesn't provide Transport Layer Security (TLS) encryption for traffic between the endpoint and the database.
Possible values include:
"plaintext"
"ssl-encryption"
AuthType
— (String
)The type of authentication to perform when connecting to a Redis target. Options include none, auth-token, and auth-role. The auth-token option requires an AuthPassword value to be provided. The auth-role option requires AuthUserName and AuthPassword values to be provided.
Possible values include:
"none"
"auth-role"
"auth-token"
AuthUserName
— (String
)The user name provided with the
auth-role
option of theAuthType
setting for a Redis target endpoint.AuthPassword
— (String
)The password provided with the
auth-role
andauth-token
options of theAuthType
setting for a Redis target endpoint.SslCaCertificateArn
— (String
)The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
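Putting the Redis settings together, auth-role authentication requires both AuthUserName and AuthPassword, and ssl-encryption can optionally pin a certificate authority through SslCaCertificateArn. The sketch below is only an assumed illustration; the host, credentials, and CA ARN are hypothetical placeholders.
// Hypothetical example: Redis target endpoint using TLS and role-based authentication.
var AWS = require('aws-sdk');
var dms = new AWS.DMS({apiVersion: '2016-01-01'});
dms.createEndpoint({
  EndpointIdentifier: 'my-redis-target',      // placeholder
  EndpointType: 'target',
  EngineName: 'redis',
  RedisSettings: {
    ServerName: 'redis.example.com',          // placeholder
    Port: 6379,
    SslSecurityProtocol: 'ssl-encryption',
    SslCaCertificateArn: 'arn:aws:dms:us-east-1:123456789012:cert/EXAMPLE', // placeholder CA ARN
    AuthType: 'auth-role',
    AuthUserName: 'dms-user',                 // required for auth-role (placeholder)
    AuthPassword: 'dms-password'              // required for auth-role and auth-token (placeholder)
  }
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});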
-
(AWS.Response)
—
Returns:
See Also:
dms.waitFor('replicationInstanceAvailable', params = {}, [callback]) ⇒ AWS.Request
Waits for the
replicationInstanceAvailable
state by periodically calling the underlying DMS.describeReplicationInstances() operation every 60 seconds (at most 60 times).Examples:
Waiting for the replicationInstanceAvailable state
var params = {
  // ... input parameters ...
};
dms.waitFor('replicationInstanceAvailable', params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
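To poll a single instance rather than every replication instance in the account, the Filters parameter described below can narrow the underlying describe call to one replication instance ARN. A minimal sketch, with a placeholder ARN:
// Hypothetical example: wait until one specific replication instance (placeholder ARN) is available.
var params = {
  Filters: [
    {
      Name: 'replication-instance-arn',
      Values: ['arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE'] // placeholder ARN
    }
  ]
};
dms.waitFor('replicationInstanceAvailable', params, function(err, data) {
  if (err) console.log(err, err.stack);                                       // an error occurred
  else     console.log(data.ReplicationInstances[0].ReplicationInstanceStatus); // e.g. "available"
});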
Parameters:
-
params
(Object)
—
Filters
— (Array<map>
)Filters applied to replication instances.
Valid filter names: replication-instance-arn | replication-instance-id | replication-instance-class | engine-version
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.ReplicationInstances
— (Array<map>
)The replication instances described.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier is a required parameter. This parameter is stored as a lowercase string.
Constraints:
-
Must contain 1-63 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
Example:
myrepinstance
-
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. It is a required parameter, although a default value is pre-selected in the DMS console.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
ReplicationInstanceStatus
— (String
)The status of the replication instance. The possible return values include:
-
"available"
-
"creating"
-
"deleted"
-
"deleting"
-
"failed"
-
"modifying"
-
"upgrading"
-
"rebooting"
-
"resetting-master-credentials"
-
"storage-full"
-
"incompatible-credentials"
-
"incompatible-network"
-
"maintenance"
-
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime
— (Date
)The time the replication instance was created.
VpcSecurityGroups
— (Array<map>
)The VPC security group for the instance.
VpcSecurityGroupId
— (String
)The VPC security group ID.
Status
— (String
)The status of the VPC security group.
AvailabilityZone
— (String
)The Availability Zone for the instance.
ReplicationSubnetGroup
— (map
)The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
PreferredMaintenanceWindow
— (String
)The maintenance window times for the replication instance. Any pending upgrades to the replication instance are performed during this time.
PendingModifiedValues
— (map
)The pending modification values.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
When modifying a major engine version of an instance, also set
AllowMajorVersionUpgrade
totrue
.AutoMinorVersionUpgrade
— (Boolean
)Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress
— (String
)The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress
— (String
)The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses
— (Array<String>
)One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses
— (Array<String>
)One or more private IP addresses for the replication instance.
PubliclyAccessible
— (Boolean
)Specifies the accessibility options for the replication instance. A value of
true
represents an instance with a public IP address. A value offalse
represents an instance with a private IP address. The default value istrue
.SecondaryAvailabilityZone
— (String
)The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil
— (Date
)The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers
— (String
)The DNS name servers supported for the replication instance to access your on-premises source or target database.
-
(AWS.Response)
—
Returns:
See Also:
dms.waitFor('replicationInstanceDeleted', params = {}, [callback]) ⇒ AWS.Request
Waits for the
replicationInstanceDeleted
state by periodically calling the underlying DMS.describeReplicationInstances() operation every 15 seconds (at most 60 times).Examples:
Waiting for the replicationInstanceDeleted state
var params = {
  // ... input parameters ...
};
dms.waitFor('replicationInstanceDeleted', params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
Parameters:
-
params
(Object)
—
Filters
— (Array<map>
)Filters applied to replication instances.
Valid filter names: replication-instance-arn | replication-instance-id | replication-instance-class | engine-version
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.ReplicationInstances
— (Array<map>
)The replication instances described.
ReplicationInstanceIdentifier
— (String
)The replication instance identifier is a required parameter. This parameter is stored as a lowercase string.
Constraints:
-
Must contain 1-63 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
Example:
myrepinstance
-
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class. It is a required parameter, although a default value is pre-selected in the DMS console.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
ReplicationInstanceStatus
— (String
)The status of the replication instance. The possible return values include:
-
"available"
-
"creating"
-
"deleted"
-
"deleting"
-
"failed"
-
"modifying"
-
"upgrading"
-
"rebooting"
-
"resetting-master-credentials"
-
"storage-full"
-
"incompatible-credentials"
-
"incompatible-network"
-
"maintenance"
-
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime
— (Date
)The time the replication instance was created.
VpcSecurityGroups
— (Array<map>
)The VPC security group for the instance.
VpcSecurityGroupId
— (String
)The VPC security group ID.
Status
— (String
)The status of the VPC security group.
AvailabilityZone
— (String
)The Availability Zone for the instance.
ReplicationSubnetGroup
— (map
)The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier
— (String
)The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription
— (String
)A description for the replication subnet group.
VpcId
— (String
)The ID of the VPC.
SubnetGroupStatus
— (String
)The status of the subnet group.
Subnets
— (Array<map>
)The subnets that are in the subnet group.
SubnetIdentifier
— (String
)The subnet identifier.
SubnetAvailabilityZone
— (map
)The Availability Zone of the subnet.
Name
— (String
)The name of the Availability Zone.
SubnetStatus
— (String
)The status of the subnet.
PreferredMaintenanceWindow
— (String
)The maintenance window times for the replication instance. Any pending upgrades to the replication instance are performed during this time.
PendingModifiedValues
— (map
)The pending modification values.
ReplicationInstanceClass
— (String
)The compute and memory capacity of the replication instance as defined for the specified replication instance class.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration.
AllocatedStorage
— (Integer
)The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
MultiAZ
— (Boolean
)Specifies whether the replication instance is a Multi-AZ deployment. You can't set the
AvailabilityZone
parameter if the Multi-AZ parameter is set totrue
.EngineVersion
— (String
)The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
When modifying a major engine version of an instance, also set
AllowMajorVersionUpgrade
totrue
.AutoMinorVersionUpgrade
— (Boolean
)Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId
— (String
)A KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the
KmsKeyId
parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
ReplicationInstanceArn
— (String
)The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress
— (String
)The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress
— (String
)The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses
— (Array<String>
)One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses
— (Array<String>
)One or more private IP addresses for the replication instance.
PubliclyAccessible
— (Boolean
)Specifies the accessibility options for the replication instance. A value of
true
represents an instance with a public IP address. A value offalse
represents an instance with a private IP address. The default value istrue
.SecondaryAvailabilityZone
— (String
)The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil
— (Date
)The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers
— (String
)The DNS name servers supported for the replication instance to access your on-premises source or target database.
-
(AWS.Response)
—
Returns:
See Also:
dms.waitFor('replicationTaskReady', params = {}, [callback]) ⇒ AWS.Request
Waits for the
replicationTaskReady
state by periodically calling the underlying DMS.describeReplicationTasks() operation every 15 seconds (at most 60 times).
Examples:
Waiting for the replicationTaskReady state
var params = {
  // ... input parameters ...
};
dms.waitFor('replicationTaskReady', params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
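In practice a narrower variant is often more useful: the Filters parameter described below accepts the replication-task-arn filter name, so the waiter polls a single task rather than every task in the account. The following is a sketch rather than part of the generated example set; the task ARN is a placeholder and WithoutSettings is optional.
// Wait for one specific replication task to reach 'ready'.
// The task ARN is a placeholder; substitute your own.
var params = {
  Filters: [
    { Name: 'replication-task-arn', Values: ['arn:aws:dms:us-east-1:123456789012:task:EXAMPLE'] }
  ],
  WithoutSettings: true // omit task settings to keep the polled responses small
};
dms.waitFor('replicationTaskReady', params, function(err, data) {
  if (err) console.log(err, err.stack); // the task never reached 'ready' within 60 attempts, or another error occurred
  else console.log(data.ReplicationTasks[0].Status); // 'ready'
});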
Parameters:
-
params
(Object)
—
Filters
— (Array<map>
)Filters applied to replication tasks.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.WithoutSettings
— (Boolean
)An option to set to avoid returning information about settings. Use this to reduce overhead when setting information is too large. To use this option, choose
true
; otherwise, choose false
(the default).
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. The data
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.ReplicationTasks
— (Array<map>
)A description of the replication tasks.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include: "full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
- "moving" – The task is being moved in response to running the MoveReplicationTask operation.
- "creating" – The task is being created in response to running the CreateReplicationTask operation.
- "deleting" – The task is being deleted in response to running the DeleteReplicationTask operation.
- "failed" – The task failed to successfully complete the database migration in response to running the StartReplicationTask operation.
- "failed-move" – The task failed to move in response to running the MoveReplicationTask operation.
- "modifying" – The task definition is being modified in response to running the ModifyReplicationTask operation.
- "ready" – The task is in a ready state where it can respond to other task operations, such as StartReplicationTask or DeleteReplicationTask.
- "running" – The task is performing a database migration in response to running the StartReplicationTask operation.
- "starting" – The task is preparing to perform a database migration in response to running the StartReplicationTask operation.
- "stopped" – The task has stopped in response to running the StopReplicationTask operation.
- "stopping" – The task is preparing to stop in response to running the StopReplicationTask operation.
- "testing" – The database migration specified for this task is being tested in response to running either the StartReplicationTaskAssessmentRun or the StartReplicationTaskAssessment operation.
Note: StartReplicationTaskAssessmentRun is an improved premigration task assessment operation. The StartReplicationTaskAssessment operation assesses data type compatibility only between the source and target database of a given migration task. In contrast, StartReplicationTaskAssessmentRun enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
or CdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error. The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of the ReplicationTask
object.
-
(AWS.Response)
—
Returns:
See Also:
dms.waitFor('replicationTaskStopped', params = {}, [callback]) ⇒ AWS.Request
Waits for the
replicationTaskStopped
state by periodically calling the underlying DMS.describeReplicationTasks() operation every 15 seconds (at most 60 times).
Examples:
Waiting for the replicationTaskStopped state
var params = {
  // ... input parameters ...
};
dms.waitFor('replicationTaskStopped', params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
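Because waitFor() returns an AWS.Request, it can also be driven through promises. The sketch below is not part of the generated example set: it assumes a placeholder task ARN, stops the task with stopReplicationTask(), and then waits for the stopped state.
// Stop a task, then wait until DMS reports it as stopped (promise style).
// The ARN is a placeholder; substitute your own.
var taskArn = 'arn:aws:dms:us-east-1:123456789012:task:EXAMPLE';
dms.stopReplicationTask({ ReplicationTaskArn: taskArn }).promise()
  .then(function() {
    return dms.waitFor('replicationTaskStopped', {
      Filters: [{ Name: 'replication-task-arn', Values: [taskArn] }]
    }).promise();
  })
  .then(function(data) {
    console.log(data.ReplicationTasks[0].StopReason); // for example, "STOP_REASON_SERVER_TIME"
  })
  .catch(function(err) {
    console.log(err, err.stack); // the stop request failed or the waiter timed out
  });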
Parameters:
-
params
(Object)
—
Filters
— (Array<map>
)Filters applied to replication tasks.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.WithoutSettings
— (Boolean
)An option to set to avoid returning information about settings. Use this to reduce overhead when setting information is too large. To use this option, choose
true
; otherwise, choose false
(the default).
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. The data
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.ReplicationTasks
— (Array<map>
)A description of the replication tasks.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include: "full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
- "moving" – The task is being moved in response to running the MoveReplicationTask operation.
- "creating" – The task is being created in response to running the CreateReplicationTask operation.
- "deleting" – The task is being deleted in response to running the DeleteReplicationTask operation.
- "failed" – The task failed to successfully complete the database migration in response to running the StartReplicationTask operation.
- "failed-move" – The task failed to move in response to running the MoveReplicationTask operation.
- "modifying" – The task definition is being modified in response to running the ModifyReplicationTask operation.
- "ready" – The task is in a ready state where it can respond to other task operations, such as StartReplicationTask or DeleteReplicationTask.
- "running" – The task is performing a database migration in response to running the StartReplicationTask operation.
- "starting" – The task is preparing to perform a database migration in response to running the StartReplicationTask operation.
- "stopped" – The task has stopped in response to running the StopReplicationTask operation.
- "stopping" – The task is preparing to stop in response to running the StopReplicationTask operation.
- "testing" – The database migration specified for this task is being tested in response to running either the StartReplicationTaskAssessmentRun or the StartReplicationTaskAssessment operation.
Note: StartReplicationTaskAssessmentRun is an improved premigration task assessment operation. The StartReplicationTaskAssessment operation assesses data type compatibility only between the source and target database of a given migration task. In contrast, StartReplicationTaskAssessmentRun enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
or CdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error. The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of the ReplicationTask
object.
-
(AWS.Response)
—
Returns:
See Also:
dms.waitFor('replicationTaskRunning', params = {}, [callback]) ⇒ AWS.Request
Waits for the
replicationTaskRunning
state by periodically calling the underlying DMS.describeReplicationTasks() operation every 15 seconds (at most 60 times).
Examples:
Waiting for the replicationTaskRunning state
var params = {
  // ... input parameters ...
};
dms.waitFor('replicationTaskRunning', params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
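In an async function the same waiter can be awaited directly. The sketch below is an illustration, not generated reference content: the helper name startAndWaitForRunning is hypothetical and the ARN passed in is a placeholder. It starts the task with startReplicationTask() and resolves once the waiter observes the running status.
// Start a task and wait until it is running (async/await style).
// startAndWaitForRunning is a hypothetical helper; the ARN passed in is a placeholder.
async function startAndWaitForRunning(taskArn) {
  await dms.startReplicationTask({
    ReplicationTaskArn: taskArn,
    StartReplicationTaskType: 'start-replication'
  }).promise();
  var data = await dms.waitFor('replicationTaskRunning', {
    Filters: [{ Name: 'replication-task-arn', Values: [taskArn] }],
    WithoutSettings: true
  }).promise();
  return data.ReplicationTasks[0].Status; // expected to be 'running'
}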
Parameters:
-
params
(Object)
—
Filters
— (Array<map>
)Filters applied to replication tasks.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.WithoutSettings
— (Boolean
)An option to set to avoid returning information about settings. Use this to reduce overhead when setting information is too large. To use this option, choose
true
; otherwise, choose false
(the default).
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. The data
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.ReplicationTasks
— (Array<map>
)A description of the replication tasks.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include: "full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
- "moving" – The task is being moved in response to running the MoveReplicationTask operation.
- "creating" – The task is being created in response to running the CreateReplicationTask operation.
- "deleting" – The task is being deleted in response to running the DeleteReplicationTask operation.
- "failed" – The task failed to successfully complete the database migration in response to running the StartReplicationTask operation.
- "failed-move" – The task failed to move in response to running the MoveReplicationTask operation.
- "modifying" – The task definition is being modified in response to running the ModifyReplicationTask operation.
- "ready" – The task is in a ready state where it can respond to other task operations, such as StartReplicationTask or DeleteReplicationTask.
- "running" – The task is performing a database migration in response to running the StartReplicationTask operation.
- "starting" – The task is preparing to perform a database migration in response to running the StartReplicationTask operation.
- "stopped" – The task has stopped in response to running the StopReplicationTask operation.
- "stopping" – The task is preparing to stop in response to running the StopReplicationTask operation.
- "testing" – The database migration specified for this task is being tested in response to running either the StartReplicationTaskAssessmentRun or the StartReplicationTaskAssessment operation.
Note: StartReplicationTaskAssessmentRun is an improved premigration task assessment operation. The StartReplicationTaskAssessment operation assesses data type compatibility only between the source and target database of a given migration task. In contrast, StartReplicationTaskAssessmentRun enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
or CdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error. The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of the ReplicationTask
object.
-
(AWS.Response)
—
Returns:
See Also:
dms.waitFor('replicationTaskDeleted', params = {}, [callback]) ⇒ AWS.Request
Waits for the
replicationTaskDeleted
state by periodically calling the underlying DMS.describeReplicationTasks() operation every 15 seconds (at most 60 times).
Examples:
Waiting for the replicationTaskDeleted state
var params = {
  // ... input parameters ...
};
dms.waitFor('replicationTaskDeleted', params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
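A common pairing is deleteReplicationTask() followed by this waiter, so that dependent cleanup runs only after the task is actually gone. The sketch below is illustrative rather than generated reference content; the ARN is a placeholder.
// Delete a task, then block until the waiter confirms it no longer exists.
// The ARN is a placeholder; substitute your own.
var taskArn = 'arn:aws:dms:us-east-1:123456789012:task:EXAMPLE';
dms.deleteReplicationTask({ ReplicationTaskArn: taskArn }, function(err) {
  if (err) return console.log(err, err.stack); // the delete request itself failed
  dms.waitFor('replicationTaskDeleted', {
    Filters: [{ Name: 'replication-task-arn', Values: [taskArn] }]
  }, function(err, data) {
    if (err) console.log(err, err.stack); // the task still existed after 60 attempts, or another error occurred
    else console.log('replication task deleted');
  });
});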
Parameters:
-
params
(Object)
—
Filters
— (Array<map>
)Filters applied to replication tasks.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Name
— required — (String
)The name of the filter as specified for a
Describe*
or similar operation.Values
— required — (Array<String>
)The filter value, which can specify one or more values used to narrow the returned results.
MaxRecords
— (Integer
)The maximum number of records to include in the response. If more records exist than the specified
MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100
Constraints: Minimum 20, maximum 100.
Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.WithoutSettings
— (Boolean
)An option to set to avoid returning information about settings. Use this to reduce overhead when setting information is too large. To use this option, choose
true
; otherwise, choose false
(the default).
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. The data
object has the following properties:Marker
— (String
)An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
MaxRecords
.ReplicationTasks
— (Array<map>
)A description of the replication tasks.
ReplicationTaskIdentifier
— (String
)The user-assigned replication task identifier or name.
Constraints:
-
Must contain 1-255 alphanumeric characters or hyphens.
-
First character must be a letter.
-
Cannot end with a hyphen or contain two consecutive hyphens.
-
SourceEndpointArn
— (String
)The Amazon Resource Name (ARN) that uniquely identifies the endpoint.
TargetEndpointArn
— (String
)The ARN that uniquely identifies the endpoint.
ReplicationInstanceArn
— (String
)The ARN of the replication instance.
MigrationType
— (String
)The type of migration.
Possible values include: "full-load"
"cdc"
"full-load-and-cdc"
TableMappings
— (String
)Table mappings specified in the task.
ReplicationTaskSettings
— (String
)The settings for the replication task.
Status
— (String
)The status of the replication task. This response parameter can return one of the following values:
- "moving" – The task is being moved in response to running the MoveReplicationTask operation.
- "creating" – The task is being created in response to running the CreateReplicationTask operation.
- "deleting" – The task is being deleted in response to running the DeleteReplicationTask operation.
- "failed" – The task failed to successfully complete the database migration in response to running the StartReplicationTask operation.
- "failed-move" – The task failed to move in response to running the MoveReplicationTask operation.
- "modifying" – The task definition is being modified in response to running the ModifyReplicationTask operation.
- "ready" – The task is in a ready state where it can respond to other task operations, such as StartReplicationTask or DeleteReplicationTask.
- "running" – The task is performing a database migration in response to running the StartReplicationTask operation.
- "starting" – The task is preparing to perform a database migration in response to running the StartReplicationTask operation.
- "stopped" – The task has stopped in response to running the StopReplicationTask operation.
- "stopping" – The task is preparing to stop in response to running the StopReplicationTask operation.
- "testing" – The database migration specified for this task is being tested in response to running either the StartReplicationTaskAssessmentRun or the StartReplicationTaskAssessment operation.
Note: StartReplicationTaskAssessmentRun is an improved premigration task assessment operation. The StartReplicationTaskAssessment operation assesses data type compatibility only between the source and target database of a given migration task. In contrast, StartReplicationTaskAssessmentRun enables you to specify a variety of premigration task assessments in addition to data type compatibility. These assessments include ones for the validity of primary key definitions and likely issues with database migration performance, among others.
-
LastFailureMessage
— (String
)The last error (failure) message generated for the replication task.
StopReason
— (String
)The reason the replication task was stopped. This response parameter can return one of the following values:
-
"STOP_REASON_FULL_LOAD_COMPLETED"
– Full-load migration completed. -
"STOP_REASON_CACHED_CHANGES_APPLIED"
– Change data capture (CDC) load completed. -
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED"
– In a full-load and CDC migration, the full load stopped as specified before starting the CDC migration. -
"STOP_REASON_SERVER_TIME"
– The migration stopped at the specified server time.
-
ReplicationTaskCreationDate
— (Date
)The date the replication task was created.
ReplicationTaskStartDate
— (Date
)The date the replication task is scheduled to start.
CdcStartPosition
— (String
)Indicates when you want a change data capture (CDC) operation to start. Use either
CdcStartPosition
or CdcStartTime
to specify when you want the CDC operation to start. Specifying both values results in an error. The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition
— (String
)Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint
— (String
)Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the
CdcStartPosition
parameter to start a CDC operation that begins at that checkpoint.ReplicationTaskArn
— (String
)The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats
— (map
)The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent
— (Integer
)The percent complete for the full load migration task.
ElapsedTimeMillis
— (Integer
)The elapsed time of the task, in milliseconds.
TablesLoaded
— (Integer
)The number of tables loaded for this task.
TablesLoading
— (Integer
)The number of tables currently loading for this task.
TablesQueued
— (Integer
)The number of tables queued for this task.
TablesErrored
— (Integer
)The number of errors that have occurred during this task.
FreshStartDate
— (Date
)The date the replication task was started either with a fresh start or a target reload.
StartDate
— (Date
)The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType.
StopDate
— (Date
)The date the replication task was stopped.
FullLoadStartDate
— (Date
)The date the replication task full load was started.
FullLoadFinishDate
— (Date
)The date the replication task full load was completed.
TaskData
— (String
)Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the Database Migration Service User Guide.
TargetReplicationInstanceArn
— (String
)The ARN of the replication instance to which this task is moved in response to running the
MoveReplicationTask
operation. Otherwise, this response parameter isn't a member of the ReplicationTask
object.
-
(AWS.Response)
—
Returns:
See Also:
Generated on Wed Nov 10 23:39:20 2021 by yard 0.9.26 (ruby-2.3.8).