snowflake.Task
Import
$ pulumi import snowflake:index/task:Task example '"<database_name>"."<schema_name>"."<task_name>"'
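For example, to import an existing task into a resource named example, substitute the fully qualified identifiers (MY_DB, MY_SCHEMA, and MY_TASK below are placeholders):
$ pulumi import snowflake:index/task:Task example '"MY_DB"."MY_SCHEMA"."MY_TASK"'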
Create Task Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Task(name: string, args: TaskArgs, opts?: CustomResourceOptions);
@overload
def Task(resource_name: str,
         args: TaskArgs,
         opts: Optional[ResourceOptions] = None)
@overload
def Task(resource_name: str,
         opts: Optional[ResourceOptions] = None,
         database: Optional[str] = None,
         schema: Optional[str] = None,
         sql_statement: Optional[str] = None,
         started: Optional[bool] = None,
         abort_detached_query: Optional[bool] = None,
         afters: Optional[Sequence[str]] = None,
         allow_overlapping_execution: Optional[str] = None,
         autocommit: Optional[bool] = None,
         binary_input_format: Optional[str] = None,
         binary_output_format: Optional[str] = None,
         client_memory_limit: Optional[int] = None,
         client_metadata_request_use_connection_ctx: Optional[bool] = None,
         client_prefetch_threads: Optional[int] = None,
         client_result_chunk_size: Optional[int] = None,
         client_result_column_case_insensitive: Optional[bool] = None,
         client_session_keep_alive: Optional[bool] = None,
         client_session_keep_alive_heartbeat_frequency: Optional[int] = None,
         client_timestamp_type_mapping: Optional[str] = None,
         comment: Optional[str] = None,
         config: Optional[str] = None,
         date_input_format: Optional[str] = None,
         date_output_format: Optional[str] = None,
         enable_unload_physical_type_optimization: Optional[bool] = None,
         error_integration: Optional[str] = None,
         error_on_nondeterministic_merge: Optional[bool] = None,
         error_on_nondeterministic_update: Optional[bool] = None,
         finalize: Optional[str] = None,
         geography_output_format: Optional[str] = None,
         geometry_output_format: Optional[str] = None,
         jdbc_treat_timestamp_ntz_as_utc: Optional[bool] = None,
         jdbc_use_session_timezone: Optional[bool] = None,
         json_indent: Optional[int] = None,
         lock_timeout: Optional[int] = None,
         log_level: Optional[str] = None,
         multi_statement_count: Optional[int] = None,
         name: Optional[str] = None,
         noorder_sequence_as_default: Optional[bool] = None,
         odbc_treat_decimal_as_int: Optional[bool] = None,
         query_tag: Optional[str] = None,
         quoted_identifiers_ignore_case: Optional[bool] = None,
         rows_per_resultset: Optional[int] = None,
         s3_stage_vpce_dns_name: Optional[str] = None,
         schedule: Optional[TaskScheduleArgs] = None,
         search_path: Optional[str] = None,
         statement_queued_timeout_in_seconds: Optional[int] = None,
         statement_timeout_in_seconds: Optional[int] = None,
         strict_json_output: Optional[bool] = None,
         suspend_task_after_num_failures: Optional[int] = None,
         task_auto_retry_attempts: Optional[int] = None,
         time_input_format: Optional[str] = None,
         time_output_format: Optional[str] = None,
         timestamp_day_is_always24h: Optional[bool] = None,
         timestamp_input_format: Optional[str] = None,
         timestamp_ltz_output_format: Optional[str] = None,
         timestamp_ntz_output_format: Optional[str] = None,
         timestamp_output_format: Optional[str] = None,
         timestamp_type_mapping: Optional[str] = None,
         timestamp_tz_output_format: Optional[str] = None,
         timezone: Optional[str] = None,
         trace_level: Optional[str] = None,
         transaction_abort_on_error: Optional[bool] = None,
         transaction_default_isolation_level: Optional[str] = None,
         two_digit_century_start: Optional[int] = None,
         unsupported_ddl_action: Optional[str] = None,
         use_cached_result: Optional[bool] = None,
         user_task_managed_initial_warehouse_size: Optional[str] = None,
         user_task_minimum_trigger_interval_in_seconds: Optional[int] = None,
         user_task_timeout_ms: Optional[int] = None,
         warehouse: Optional[str] = None,
         week_of_year_policy: Optional[int] = None,
         week_start: Optional[int] = None,
         when: Optional[str] = None)
func NewTask(ctx *Context, name string, args TaskArgs, opts ...ResourceOption) (*Task, error)
public Task(string name, TaskArgs args, CustomResourceOptions? opts = null)
type: snowflake:Task
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var taskResource = new Snowflake.Task("taskResource", new()
{
    Database = "string",
    Schema = "string",
    SqlStatement = "string",
    Started = false,
    AbortDetachedQuery = false,
    Afters = new[]
    {
        "string",
    },
    AllowOverlappingExecution = "string",
    Autocommit = false,
    BinaryInputFormat = "string",
    BinaryOutputFormat = "string",
    ClientMemoryLimit = 0,
    ClientMetadataRequestUseConnectionCtx = false,
    ClientPrefetchThreads = 0,
    ClientResultChunkSize = 0,
    ClientResultColumnCaseInsensitive = false,
    ClientSessionKeepAlive = false,
    ClientSessionKeepAliveHeartbeatFrequency = 0,
    ClientTimestampTypeMapping = "string",
    Comment = "string",
    Config = "string",
    DateInputFormat = "string",
    DateOutputFormat = "string",
    EnableUnloadPhysicalTypeOptimization = false,
    ErrorIntegration = "string",
    ErrorOnNondeterministicMerge = false,
    ErrorOnNondeterministicUpdate = false,
    Finalize = "string",
    GeographyOutputFormat = "string",
    GeometryOutputFormat = "string",
    JdbcTreatTimestampNtzAsUtc = false,
    JdbcUseSessionTimezone = false,
    JsonIndent = 0,
    LockTimeout = 0,
    LogLevel = "string",
    MultiStatementCount = 0,
    Name = "string",
    NoorderSequenceAsDefault = false,
    OdbcTreatDecimalAsInt = false,
    QueryTag = "string",
    QuotedIdentifiersIgnoreCase = false,
    RowsPerResultset = 0,
    S3StageVpceDnsName = "string",
    Schedule = new Snowflake.Inputs.TaskScheduleArgs
    {
        Minutes = 0,
        UsingCron = "string",
    },
    SearchPath = "string",
    StatementQueuedTimeoutInSeconds = 0,
    StatementTimeoutInSeconds = 0,
    StrictJsonOutput = false,
    SuspendTaskAfterNumFailures = 0,
    TaskAutoRetryAttempts = 0,
    TimeInputFormat = "string",
    TimeOutputFormat = "string",
    TimestampDayIsAlways24h = false,
    TimestampInputFormat = "string",
    TimestampLtzOutputFormat = "string",
    TimestampNtzOutputFormat = "string",
    TimestampOutputFormat = "string",
    TimestampTypeMapping = "string",
    TimestampTzOutputFormat = "string",
    Timezone = "string",
    TraceLevel = "string",
    TransactionAbortOnError = false,
    TransactionDefaultIsolationLevel = "string",
    TwoDigitCenturyStart = 0,
    UnsupportedDdlAction = "string",
    UseCachedResult = false,
    UserTaskManagedInitialWarehouseSize = "string",
    UserTaskMinimumTriggerIntervalInSeconds = 0,
    UserTaskTimeoutMs = 0,
    Warehouse = "string",
    WeekOfYearPolicy = 0,
    WeekStart = 0,
    When = "string",
});
example, err := snowflake.NewTask(ctx, "taskResource", &snowflake.TaskArgs{
	Database:           pulumi.String("string"),
	Schema:             pulumi.String("string"),
	SqlStatement:       pulumi.String("string"),
	Started:            pulumi.Bool(false),
	AbortDetachedQuery: pulumi.Bool(false),
	Afters: pulumi.StringArray{
		pulumi.String("string"),
	},
	AllowOverlappingExecution:                pulumi.String("string"),
	Autocommit:                               pulumi.Bool(false),
	BinaryInputFormat:                        pulumi.String("string"),
	BinaryOutputFormat:                       pulumi.String("string"),
	ClientMemoryLimit:                        pulumi.Int(0),
	ClientMetadataRequestUseConnectionCtx:    pulumi.Bool(false),
	ClientPrefetchThreads:                    pulumi.Int(0),
	ClientResultChunkSize:                    pulumi.Int(0),
	ClientResultColumnCaseInsensitive:        pulumi.Bool(false),
	ClientSessionKeepAlive:                   pulumi.Bool(false),
	ClientSessionKeepAliveHeartbeatFrequency: pulumi.Int(0),
	ClientTimestampTypeMapping:               pulumi.String("string"),
	Comment:                                  pulumi.String("string"),
	Config:                                   pulumi.String("string"),
	DateInputFormat:                          pulumi.String("string"),
	DateOutputFormat:                         pulumi.String("string"),
	EnableUnloadPhysicalTypeOptimization:     pulumi.Bool(false),
	ErrorIntegration:                         pulumi.String("string"),
	ErrorOnNondeterministicMerge:             pulumi.Bool(false),
	ErrorOnNondeterministicUpdate:            pulumi.Bool(false),
	Finalize:                                 pulumi.String("string"),
	GeographyOutputFormat:                    pulumi.String("string"),
	GeometryOutputFormat:                     pulumi.String("string"),
	JdbcTreatTimestampNtzAsUtc:               pulumi.Bool(false),
	JdbcUseSessionTimezone:                   pulumi.Bool(false),
	JsonIndent:                               pulumi.Int(0),
	LockTimeout:                              pulumi.Int(0),
	LogLevel:                                 pulumi.String("string"),
	MultiStatementCount:                      pulumi.Int(0),
	Name:                                     pulumi.String("string"),
	NoorderSequenceAsDefault:                 pulumi.Bool(false),
	OdbcTreatDecimalAsInt:                    pulumi.Bool(false),
	QueryTag:                                 pulumi.String("string"),
	QuotedIdentifiersIgnoreCase:              pulumi.Bool(false),
	RowsPerResultset:                         pulumi.Int(0),
	S3StageVpceDnsName:                       pulumi.String("string"),
	Schedule: &snowflake.TaskScheduleArgs{
		Minutes:   pulumi.Int(0),
		UsingCron: pulumi.String("string"),
	},
	SearchPath:                              pulumi.String("string"),
	StatementQueuedTimeoutInSeconds:         pulumi.Int(0),
	StatementTimeoutInSeconds:               pulumi.Int(0),
	StrictJsonOutput:                        pulumi.Bool(false),
	SuspendTaskAfterNumFailures:             pulumi.Int(0),
	TaskAutoRetryAttempts:                   pulumi.Int(0),
	TimeInputFormat:                         pulumi.String("string"),
	TimeOutputFormat:                        pulumi.String("string"),
	TimestampDayIsAlways24h:                 pulumi.Bool(false),
	TimestampInputFormat:                    pulumi.String("string"),
	TimestampLtzOutputFormat:                pulumi.String("string"),
	TimestampNtzOutputFormat:                pulumi.String("string"),
	TimestampOutputFormat:                   pulumi.String("string"),
	TimestampTypeMapping:                    pulumi.String("string"),
	TimestampTzOutputFormat:                 pulumi.String("string"),
	Timezone:                                pulumi.String("string"),
	TraceLevel:                              pulumi.String("string"),
	TransactionAbortOnError:                 pulumi.Bool(false),
	TransactionDefaultIsolationLevel:        pulumi.String("string"),
	TwoDigitCenturyStart:                    pulumi.Int(0),
	UnsupportedDdlAction:                    pulumi.String("string"),
	UseCachedResult:                         pulumi.Bool(false),
	UserTaskManagedInitialWarehouseSize:     pulumi.String("string"),
	UserTaskMinimumTriggerIntervalInSeconds: pulumi.Int(0),
	UserTaskTimeoutMs:                       pulumi.Int(0),
	Warehouse:                               pulumi.String("string"),
	WeekOfYearPolicy:                        pulumi.Int(0),
	WeekStart:                               pulumi.Int(0),
	When:                                    pulumi.String("string"),
})
var taskResource = new Task("taskResource", TaskArgs.builder()
    .database("string")
    .schema("string")
    .sqlStatement("string")
    .started(false)
    .abortDetachedQuery(false)
    .afters("string")
    .allowOverlappingExecution("string")
    .autocommit(false)
    .binaryInputFormat("string")
    .binaryOutputFormat("string")
    .clientMemoryLimit(0)
    .clientMetadataRequestUseConnectionCtx(false)
    .clientPrefetchThreads(0)
    .clientResultChunkSize(0)
    .clientResultColumnCaseInsensitive(false)
    .clientSessionKeepAlive(false)
    .clientSessionKeepAliveHeartbeatFrequency(0)
    .clientTimestampTypeMapping("string")
    .comment("string")
    .config("string")
    .dateInputFormat("string")
    .dateOutputFormat("string")
    .enableUnloadPhysicalTypeOptimization(false)
    .errorIntegration("string")
    .errorOnNondeterministicMerge(false)
    .errorOnNondeterministicUpdate(false)
    .finalize("string")
    .geographyOutputFormat("string")
    .geometryOutputFormat("string")
    .jdbcTreatTimestampNtzAsUtc(false)
    .jdbcUseSessionTimezone(false)
    .jsonIndent(0)
    .lockTimeout(0)
    .logLevel("string")
    .multiStatementCount(0)
    .name("string")
    .noorderSequenceAsDefault(false)
    .odbcTreatDecimalAsInt(false)
    .queryTag("string")
    .quotedIdentifiersIgnoreCase(false)
    .rowsPerResultset(0)
    .s3StageVpceDnsName("string")
    .schedule(TaskScheduleArgs.builder()
        .minutes(0)
        .usingCron("string")
        .build())
    .searchPath("string")
    .statementQueuedTimeoutInSeconds(0)
    .statementTimeoutInSeconds(0)
    .strictJsonOutput(false)
    .suspendTaskAfterNumFailures(0)
    .taskAutoRetryAttempts(0)
    .timeInputFormat("string")
    .timeOutputFormat("string")
    .timestampDayIsAlways24h(false)
    .timestampInputFormat("string")
    .timestampLtzOutputFormat("string")
    .timestampNtzOutputFormat("string")
    .timestampOutputFormat("string")
    .timestampTypeMapping("string")
    .timestampTzOutputFormat("string")
    .timezone("string")
    .traceLevel("string")
    .transactionAbortOnError(false)
    .transactionDefaultIsolationLevel("string")
    .twoDigitCenturyStart(0)
    .unsupportedDdlAction("string")
    .useCachedResult(false)
    .userTaskManagedInitialWarehouseSize("string")
    .userTaskMinimumTriggerIntervalInSeconds(0)
    .userTaskTimeoutMs(0)
    .warehouse("string")
    .weekOfYearPolicy(0)
    .weekStart(0)
    .when("string")
    .build());
task_resource = snowflake.Task("taskResource",
    database="string",
    schema="string",
    sql_statement="string",
    started=False,
    abort_detached_query=False,
    afters=["string"],
    allow_overlapping_execution="string",
    autocommit=False,
    binary_input_format="string",
    binary_output_format="string",
    client_memory_limit=0,
    client_metadata_request_use_connection_ctx=False,
    client_prefetch_threads=0,
    client_result_chunk_size=0,
    client_result_column_case_insensitive=False,
    client_session_keep_alive=False,
    client_session_keep_alive_heartbeat_frequency=0,
    client_timestamp_type_mapping="string",
    comment="string",
    config="string",
    date_input_format="string",
    date_output_format="string",
    enable_unload_physical_type_optimization=False,
    error_integration="string",
    error_on_nondeterministic_merge=False,
    error_on_nondeterministic_update=False,
    finalize="string",
    geography_output_format="string",
    geometry_output_format="string",
    jdbc_treat_timestamp_ntz_as_utc=False,
    jdbc_use_session_timezone=False,
    json_indent=0,
    lock_timeout=0,
    log_level="string",
    multi_statement_count=0,
    name="string",
    noorder_sequence_as_default=False,
    odbc_treat_decimal_as_int=False,
    query_tag="string",
    quoted_identifiers_ignore_case=False,
    rows_per_resultset=0,
    s3_stage_vpce_dns_name="string",
    schedule={
        "minutes": 0,
        "using_cron": "string",
    },
    search_path="string",
    statement_queued_timeout_in_seconds=0,
    statement_timeout_in_seconds=0,
    strict_json_output=False,
    suspend_task_after_num_failures=0,
    task_auto_retry_attempts=0,
    time_input_format="string",
    time_output_format="string",
    timestamp_day_is_always24h=False,
    timestamp_input_format="string",
    timestamp_ltz_output_format="string",
    timestamp_ntz_output_format="string",
    timestamp_output_format="string",
    timestamp_type_mapping="string",
    timestamp_tz_output_format="string",
    timezone="string",
    trace_level="string",
    transaction_abort_on_error=False,
    transaction_default_isolation_level="string",
    two_digit_century_start=0,
    unsupported_ddl_action="string",
    use_cached_result=False,
    user_task_managed_initial_warehouse_size="string",
    user_task_minimum_trigger_interval_in_seconds=0,
    user_task_timeout_ms=0,
    warehouse="string",
    week_of_year_policy=0,
    week_start=0,
    when="string")
const taskResource = new snowflake.Task("taskResource", {
    database: "string",
    schema: "string",
    sqlStatement: "string",
    started: false,
    abortDetachedQuery: false,
    afters: ["string"],
    allowOverlappingExecution: "string",
    autocommit: false,
    binaryInputFormat: "string",
    binaryOutputFormat: "string",
    clientMemoryLimit: 0,
    clientMetadataRequestUseConnectionCtx: false,
    clientPrefetchThreads: 0,
    clientResultChunkSize: 0,
    clientResultColumnCaseInsensitive: false,
    clientSessionKeepAlive: false,
    clientSessionKeepAliveHeartbeatFrequency: 0,
    clientTimestampTypeMapping: "string",
    comment: "string",
    config: "string",
    dateInputFormat: "string",
    dateOutputFormat: "string",
    enableUnloadPhysicalTypeOptimization: false,
    errorIntegration: "string",
    errorOnNondeterministicMerge: false,
    errorOnNondeterministicUpdate: false,
    finalize: "string",
    geographyOutputFormat: "string",
    geometryOutputFormat: "string",
    jdbcTreatTimestampNtzAsUtc: false,
    jdbcUseSessionTimezone: false,
    jsonIndent: 0,
    lockTimeout: 0,
    logLevel: "string",
    multiStatementCount: 0,
    name: "string",
    noorderSequenceAsDefault: false,
    odbcTreatDecimalAsInt: false,
    queryTag: "string",
    quotedIdentifiersIgnoreCase: false,
    rowsPerResultset: 0,
    s3StageVpceDnsName: "string",
    schedule: {
        minutes: 0,
        usingCron: "string",
    },
    searchPath: "string",
    statementQueuedTimeoutInSeconds: 0,
    statementTimeoutInSeconds: 0,
    strictJsonOutput: false,
    suspendTaskAfterNumFailures: 0,
    taskAutoRetryAttempts: 0,
    timeInputFormat: "string",
    timeOutputFormat: "string",
    timestampDayIsAlways24h: false,
    timestampInputFormat: "string",
    timestampLtzOutputFormat: "string",
    timestampNtzOutputFormat: "string",
    timestampOutputFormat: "string",
    timestampTypeMapping: "string",
    timestampTzOutputFormat: "string",
    timezone: "string",
    traceLevel: "string",
    transactionAbortOnError: false,
    transactionDefaultIsolationLevel: "string",
    twoDigitCenturyStart: 0,
    unsupportedDdlAction: "string",
    useCachedResult: false,
    userTaskManagedInitialWarehouseSize: "string",
    userTaskMinimumTriggerIntervalInSeconds: 0,
    userTaskTimeoutMs: 0,
    warehouse: "string",
    weekOfYearPolicy: 0,
    weekStart: 0,
    when: "string",
});
type: snowflake:Task
properties:
    abortDetachedQuery: false
    afters:
        - string
    allowOverlappingExecution: string
    autocommit: false
    binaryInputFormat: string
    binaryOutputFormat: string
    clientMemoryLimit: 0
    clientMetadataRequestUseConnectionCtx: false
    clientPrefetchThreads: 0
    clientResultChunkSize: 0
    clientResultColumnCaseInsensitive: false
    clientSessionKeepAlive: false
    clientSessionKeepAliveHeartbeatFrequency: 0
    clientTimestampTypeMapping: string
    comment: string
    config: string
    database: string
    dateInputFormat: string
    dateOutputFormat: string
    enableUnloadPhysicalTypeOptimization: false
    errorIntegration: string
    errorOnNondeterministicMerge: false
    errorOnNondeterministicUpdate: false
    finalize: string
    geographyOutputFormat: string
    geometryOutputFormat: string
    jdbcTreatTimestampNtzAsUtc: false
    jdbcUseSessionTimezone: false
    jsonIndent: 0
    lockTimeout: 0
    logLevel: string
    multiStatementCount: 0
    name: string
    noorderSequenceAsDefault: false
    odbcTreatDecimalAsInt: false
    queryTag: string
    quotedIdentifiersIgnoreCase: false
    rowsPerResultset: 0
    s3StageVpceDnsName: string
    schedule:
        minutes: 0
        usingCron: string
    schema: string
    searchPath: string
    sqlStatement: string
    started: false
    statementQueuedTimeoutInSeconds: 0
    statementTimeoutInSeconds: 0
    strictJsonOutput: false
    suspendTaskAfterNumFailures: 0
    taskAutoRetryAttempts: 0
    timeInputFormat: string
    timeOutputFormat: string
    timestampDayIsAlways24h: false
    timestampInputFormat: string
    timestampLtzOutputFormat: string
    timestampNtzOutputFormat: string
    timestampOutputFormat: string
    timestampTypeMapping: string
    timestampTzOutputFormat: string
    timezone: string
    traceLevel: string
    transactionAbortOnError: false
    transactionDefaultIsolationLevel: string
    twoDigitCenturyStart: 0
    unsupportedDdlAction: string
    useCachedResult: false
    userTaskManagedInitialWarehouseSize: string
    userTaskMinimumTriggerIntervalInSeconds: 0
    userTaskTimeoutMs: 0
    warehouse: string
    weekOfYearPolicy: 0
    weekStart: 0
    when: string
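The reference example above fills every input with a placeholder. A typical configuration sets only the identifying fields, the SQL to run, and a schedule. The following Python sketch shows one plausible shape; the database, schema, warehouse names, SQL text, and cron string are assumed placeholders, not values taken from this page:
import pulumi_snowflake as snowflake

# Nightly cleanup task on an existing warehouse; every identifier below is a placeholder.
cleanup_task = snowflake.Task("cleanupTask",
    database="MY_DB",
    schema="MY_SCHEMA",
    warehouse="MY_WH",
    sql_statement="DELETE FROM MY_DB.MY_SCHEMA.EVENTS WHERE TS < DATEADD(day, -30, CURRENT_TIMESTAMP())",
    # Assumes Snowflake task cron syntax: minute hour day-of-month month day-of-week, then a time zone.
    schedule=snowflake.TaskScheduleArgs(using_cron="0 2 * * * UTC"),
    started=True)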
Task Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
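For example, the nested schedule input can be passed in either form. This is a minimal sketch with placeholder values, assuming the pulumi_snowflake Python SDK used in the examples above:
import pulumi_snowflake as snowflake

# Argument-class form for the nested schedule input.
task_a = snowflake.Task("taskA",
    database="MY_DB",
    schema="MY_SCHEMA",
    sql_statement="SELECT 1",
    started=False,
    schedule=snowflake.TaskScheduleArgs(minutes=15))

# Equivalent dictionary-literal form.
task_b = snowflake.Task("taskB",
    database="MY_DB",
    schema="MY_SCHEMA",
    sql_statement="SELECT 1",
    started=False,
    schedule={"minutes": 15})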
The Task resource accepts the following input properties:
- Database string
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- Schema string
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- SqlStatement string
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- Started bool
- Specifies if the task should be started or suspended.
- AbortDetachedQuery bool
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORTDETACHEDQUERY docs.
- Afters List<string>
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- AllowOverlappingExecution string
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration, the provider will put "default" there, which means to use the Snowflake default for this value.
- Autocommit bool
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- BinaryInputFormat string
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARYINPUTFORMAT docs.
- BinaryOutputFormat string
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARYOUTPUTFORMAT docs.
- ClientMemoryLimit int
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENTMEMORYLIMIT docs.
- ClientMetadataRequestUseConnectionCtx bool
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENTMETADATAREQUESTUSECONNECTION_CTX docs.
- ClientPrefetchThreads int
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENTPREFETCHTHREADS docs.
- ClientResultChunkSize int
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENTRESULTCHUNK_SIZE docs.
- ClientResultColumnCaseInsensitive bool
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENTRESULTCOLUMNCASEINSENSITIVE docs.
- ClientSessionKeepAlive bool
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENTSESSIONKEEP_ALIVE docs.
- ClientSessionKeepAliveHeartbeatFrequency int
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENTSESSIONKEEPALIVEHEARTBEAT_FREQUENCY docs.
- ClientTimestampTypeMapping string
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENTTIMESTAMPTYPE_MAPPING docs.
- Comment string
- Specifies a comment for the task.
- Config string
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- DateInputFormat string
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEINPUTFORMAT docs.
- DateOutputFormat string
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEOUTPUTFORMAT docs.
- EnableUnloadPhysicalTypeOptimization bool
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLEUNLOADPHYSICALTYPEOPTIMIZATION docs.
- ErrorIntegration string
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |, ., ". For more information about this resource, see docs.
- ErrorOnNondeterministicMerge bool
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_MERGE docs.
- ErrorOnNondeterministicUpdate bool
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_UPDATE docs.
- Finalize string
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- GeographyOutputFormat string
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHYOUTPUTFORMAT docs.
- GeometryOutputFormat string
- Display format for GEOMETRY values. For more information, check GEOMETRYOUTPUTFORMAT docs.
- JdbcTreatTimestampNtzAsUtc bool
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- JdbcUseSessionTimezone bool
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBCUSESESSION_TIMEZONE docs.
- JsonIndent int
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- LockTimeout int
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- LogLevel string
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- MultiStatementCount int
- Number of statements to execute when using the multi-statement capability. For more information, check MULTISTATEMENTCOUNT docs.
- Name string
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- NoorderSequenceAsDefault bool
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDERSEQUENCEAS_DEFAULT docs.
- OdbcTreatDecimalAsInt bool
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBCTREATDECIMALASINT docs.
- QueryTag string
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- QuotedIdentifiersIgnoreCase bool
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTEDIDENTIFIERSIGNORE_CASE docs.
- RowsPerResultset int
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWSPERRESULTSET docs.
- S3StageVpceDnsName string
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3STAGEVPCEDNSNAME docs.
- Schedule TaskSchedule
- The schedule for periodically running the task. This can be a cron or interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set)
- SearchPath string
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- StatementQueuedTimeoutInSeconds int
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAXCONCURRENCYLEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENTQUEUEDTIMEOUTINSECONDS docs.
- StatementTimeoutInSeconds int
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENTTIMEOUTIN_SECONDS docs.
- StrictJsonOutput bool
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICTJSONOUTPUT docs.
- SuspendTaskAfterNumFailures int
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPENDTASKAFTERNUMFAILURES docs.
- TaskAutoRetryAttempts int
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASKAUTORETRY_ATTEMPTS docs.
- TimeInputFormat string
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIMEINPUTFORMAT docs.
- TimeOutputFormat string
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIMEOUTPUTFORMAT docs.
- TimestampDayIsAlways24h bool
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMPDAYISALWAYS24H docs.
- TimestampInputFormat string
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMPINPUTFORMAT docs.
- TimestampLtzOutputFormat string
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- TimestampNtzOutputFormat string
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- TimestampOutputFormat string
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMPOUTPUTFORMAT docs.
- TimestampTypeMapping string
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- TimestampTzOutputFormat string
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- Timezone string
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- TraceLevel string
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- TransactionAbortOnError bool
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTIONABORTON_ERROR docs.
- TransactionDefaultIsolationLevel string
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTIONDEFAULTISOLATION_LEVEL docs.
- TwoDigitCenturyStart int
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- UnsupportedDdlAction string
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTEDDDLACTION docs.
- UseCachedResult bool
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USECACHEDRESULT docs.
- UserTaskManagedInitialWarehouseSize string
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse). For more information about warehouses, see docs. For more information, check USERTASKMANAGEDINITIALWAREHOUSE_SIZE docs.
- UserTaskMinimumTriggerIntervalInSeconds int
- Minimum amount of time between Triggered Task executions in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- UserTaskTimeoutMs int
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USERTASKTIMEOUT_MS docs.
- Warehouse string
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist of only upper-cased letters. (Conflicts with user_task_managed_initial_warehouse_size) For more information about this resource, see docs.
- WeekOfYearPolicy int
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- WeekStart int
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- When string
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
- Database string
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- Schema string
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- SqlStatement string
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- Started bool
- Specifies if the task should be started or suspended.
- AbortDetachedQuery bool
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORTDETACHEDQUERY docs.
- Afters []string
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- AllowOverlappingExecution string
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration, the provider will put "default" there, which means to use the Snowflake default for this value.
- Autocommit bool
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- BinaryInputFormat string
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARYINPUTFORMAT docs.
- BinaryOutputFormat string
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARYOUTPUTFORMAT docs.
- ClientMemoryLimit int
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENTMEMORYLIMIT docs.
- ClientMetadataRequestUseConnectionCtx bool
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENTMETADATAREQUESTUSECONNECTION_CTX docs.
- ClientPrefetchThreads int
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENTPREFETCHTHREADS docs.
- ClientResultChunkSize int
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENTRESULTCHUNK_SIZE docs.
- ClientResultColumnCaseInsensitive bool
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENTRESULTCOLUMNCASEINSENSITIVE docs.
- ClientSessionKeepAlive bool
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENTSESSIONKEEP_ALIVE docs.
- ClientSessionKeepAliveHeartbeatFrequency int
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENTSESSIONKEEPALIVEHEARTBEAT_FREQUENCY docs.
- ClientTimestampTypeMapping string
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENTTIMESTAMPTYPE_MAPPING docs.
- Comment string
- Specifies a comment for the task.
- Config string
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- DateInputFormat string
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEINPUTFORMAT docs.
- DateOutputFormat string
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEOUTPUTFORMAT docs.
- EnableUnloadPhysicalTypeOptimization bool
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLEUNLOADPHYSICALTYPEOPTIMIZATION docs.
- ErrorIntegration string
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |, ., ". For more information about this resource, see docs.
- ErrorOnNondeterministicMerge bool
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_MERGE docs.
- ErrorOnNondeterministicUpdate bool
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_UPDATE docs.
- Finalize string
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- GeographyOutputFormat string
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHYOUTPUTFORMAT docs.
- GeometryOutputFormat string
- Display format for GEOMETRY values. For more information, check GEOMETRYOUTPUTFORMAT docs.
- JdbcTreatTimestampNtzAsUtc bool
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- JdbcUseSessionTimezone bool
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBCUSESESSION_TIMEZONE docs.
- JsonIndent int
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- LockTimeout int
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- LogLevel string
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- MultiStatementCount int
- Number of statements to execute when using the multi-statement capability. For more information, check MULTISTATEMENTCOUNT docs.
- Name string
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |, ., ".
- NoorderSequenceAsDefault bool
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDERSEQUENCEAS_DEFAULT docs.
- OdbcTreatDecimalAsInt bool
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBCTREATDECIMALASINT docs.
- QueryTag string
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- QuotedIdentifiersIgnoreCase bool
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTEDIDENTIFIERSIGNORE_CASE docs.
- RowsPerResultset int
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWSPERRESULTSET docs.
- S3StageVpceDnsName string
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3STAGEVPCEDNSNAME docs.
- Schedule TaskScheduleArgs
- The schedule for periodically running the task. This can be a cron or interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set)
- SearchPath string
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- StatementQueuedTimeoutInSeconds int
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAXCONCURRENCYLEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENTQUEUEDTIMEOUTINSECONDS docs.
- StatementTimeoutInSeconds int
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENTTIMEOUTIN_SECONDS docs.
- StrictJsonOutput bool
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICTJSONOUTPUT docs.
- SuspendTaskAfterNumFailures int
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPENDTASKAFTERNUMFAILURES docs.
- TaskAutoRetryAttempts int
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASKAUTORETRY_ATTEMPTS docs.
- TimeInputFormat string
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIMEINPUTFORMAT docs.
- TimeOutputFormat string
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIMEOUTPUTFORMAT docs.
- TimestampDayIsAlways24h bool
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMPDAYISALWAYS24H docs.
- TimestampInputFormat string
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMPINPUTFORMAT docs.
- TimestampLtzOutputFormat string
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- TimestampNtzOutputFormat string
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- TimestampOutputFormat string
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMPOUTPUTFORMAT docs.
- TimestampTypeMapping string
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- TimestampTzOutputFormat string
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- Timezone string
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- TraceLevel string
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- TransactionAbortOnError bool
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTION_ABORT_ON_ERROR docs.
- TransactionDefaultIsolationLevel string
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTION_DEFAULT_ISOLATION_LEVEL docs.
- TwoDigitCenturyStart int
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- UnsupportedDdlAction string
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTED_DDL_ACTION docs.
- UseCachedResult bool
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USE_CACHED_RESULT docs.
- UserTaskManagedInitialWarehouseSize string
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse). For more information about warehouses, see docs. For more information, check USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE docs.
- UserTaskMinimumTriggerIntervalInSeconds int
- Minimum amount of time between Triggered Task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- UserTaskTimeoutMs int
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USER_TASK_TIMEOUT_MS docs.
- Warehouse string
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist of only upper-case letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- WeekOfYearPolicy int
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- WeekStart int
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- When string
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
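As an illustration of how schedule, when, and sqlStatement fit together, here is a minimal TypeScript sketch; names such as MY_DB, MY_SCHEMA, MY_WH, and MY_STREAM are placeholders, and the exact field shapes should be verified against the constructor syntax shown earlier on this page:

import * as snowflake from "@pulumi/snowflake";

// Runs every 15 minutes, but only does work when the (placeholder) stream has new rows.
const ingestTask = new snowflake.Task("ingest-task", {
    database: "MY_DB",
    schema: "MY_SCHEMA",
    name: "INGEST_TASK",
    warehouse: "MY_WH", // upper-case identifier, per the warehouse note above
    schedule: { minutes: 15 }, // interval schedule; usingCron is the alternative sub-field
    when: "SYSTEM$STREAM_HAS_DATA('MY_DB.MY_SCHEMA.MY_STREAM')",
    sqlStatement: "INSERT INTO MY_DB.MY_SCHEMA.TARGET SELECT * FROM MY_DB.MY_SCHEMA.MY_STREAM",
    started: true,
});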
- database String
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- schema String
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- sqlStatement String
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- started Boolean
- Specifies if the task should be started or suspended.
- abortDetachedQuery Boolean
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORT_DETACHED_QUERY docs.
- afters List<String>
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- allowOverlappingExecution String
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration, the provider will use "default", which means the Snowflake default is used for this value.
- autocommit Boolean
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- binaryInputFormat String
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARY_INPUT_FORMAT docs.
- binaryOutputFormat String
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARY_OUTPUT_FORMAT docs.
- clientMemoryLimit Integer
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENT_MEMORY_LIMIT docs.
- clientMetadataRequestUseConnectionCtx Boolean
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX docs.
- clientPrefetchThreads Integer
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENT_PREFETCH_THREADS docs.
- clientResultChunkSize Integer
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENT_RESULT_CHUNK_SIZE docs.
- clientResultColumnCaseInsensitive Boolean
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENT_RESULT_COLUMN_CASE_INSENSITIVE docs.
- clientSessionKeepAlive Boolean
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENT_SESSION_KEEP_ALIVE docs.
- clientSessionKeepAliveHeartbeatFrequency Integer
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY docs.
- clientTimestampTypeMapping String
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENT_TIMESTAMP_TYPE_MAPPING docs.
- comment String
- Specifies a comment for the task.
- config String
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- dateInputFormat String
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_INPUT_FORMAT docs.
- dateOutputFormat String
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_OUTPUT_FORMAT docs.
- enableUnloadPhysicalTypeOptimization Boolean
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION docs.
- errorIntegration String
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- errorOnNondeterministicMerge Boolean
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_MERGE docs.
- errorOnNondeterministicUpdate Boolean
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_UPDATE docs.
- finalize_ String
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- geographyOutputFormat String
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHY_OUTPUT_FORMAT docs.
- geometryOutputFormat String
- Display format for GEOMETRY values. For more information, check GEOMETRY_OUTPUT_FORMAT docs.
- jdbcTreatTimestampNtzAsUtc Boolean
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- jdbcUseSessionTimezone Boolean
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBC_USE_SESSION_TIMEZONE docs.
- jsonIndent Integer
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- lockTimeout Integer
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- logLevel String
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- multiStatementCount Integer
- Number of statements to execute when using the multi-statement capability. For more information, check MULTI_STATEMENT_COUNT docs.
- name String
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- noorderSequenceAsDefault Boolean
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDER_SEQUENCE_AS_DEFAULT docs.
- odbcTreatDecimalAsInt Boolean
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBC_TREAT_DECIMAL_AS_INT docs.
- queryTag String
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- quotedIdentifiersIgnoreCase Boolean
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTED_IDENTIFIERS_IGNORE_CASE docs.
- rowsPerResultset Integer
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWS_PER_RESULTSET docs.
- s3StageVpceDnsName String
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3_STAGE_VPCE_DNS_NAME docs.
- schedule
TaskSchedule 
- The schedule for periodically running the task. This can be a cron or interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set)
- searchPath String
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- statementQueuedTimeoutInSeconds Integer
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAX_CONCURRENCY_LEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENT_QUEUED_TIMEOUT_IN_SECONDS docs.
- statementTimeoutInSeconds Integer
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENT_TIMEOUT_IN_SECONDS docs.
- strictJsonOutput Boolean
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICT_JSON_OUTPUT docs.
- suspendTaskAfterNumFailures Integer
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPEND_TASK_AFTER_NUM_FAILURES docs.
- taskAutoRetryAttempts Integer
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASK_AUTO_RETRY_ATTEMPTS docs.
- timeInputFormat String
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIME_INPUT_FORMAT docs.
- timeOutputFormat String
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIME_OUTPUT_FORMAT docs.
- timestampDayIsAlways24h Boolean
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMP_DAY_IS_ALWAYS_24H docs.
- timestampInputFormat String
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMP_INPUT_FORMAT docs.
- timestampLtzOutputFormat String
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- timestampNtzOutputFormat String
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- timestampOutputFormat String
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_OUTPUT_FORMAT docs.
- timestampTypeMapping String
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- timestampTzOutputFormat String
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- timezone String
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- traceLevel String
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- transactionAbortOnError Boolean
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTION_ABORT_ON_ERROR docs.
- transactionDefaultIsolationLevel String
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTION_DEFAULT_ISOLATION_LEVEL docs.
- twoDigitCenturyStart Integer
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- unsupportedDdlAction String
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTED_DDL_ACTION docs.
- useCachedResult Boolean
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USE_CACHED_RESULT docs.
- userTaskManagedInitialWarehouseSize String
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse). For more information about warehouses, see docs. For more information, check USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE docs.
- userTaskMinimumTriggerIntervalInSeconds Integer
- Minimum amount of time between Triggered Task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- userTaskTimeoutMs Integer
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USER_TASK_TIMEOUT_MS docs.
- warehouse String
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist of only upper-case letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- weekOfYearPolicy Integer
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- weekStart Integer
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- when String
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
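To make the afters and finalize relationships concrete, the following TypeScript sketch wires a child task and a finalizer to a scheduled root task. The fully qualified task name is assembled by hand to match the import format shown at the top of this page, and all identifiers are placeholders; treat it as an outline under those assumptions rather than the provider's canonical example.

import * as pulumi from "@pulumi/pulumi";
import * as snowflake from "@pulumi/snowflake";

const rootTask = new snowflake.Task("root", {
    database: "MY_DB",
    schema: "MY_SCHEMA",
    name: "ROOT_TASK",
    warehouse: "MY_WH",
    schedule: { minutes: 60 },
    sqlStatement: "CALL MY_DB.MY_SCHEMA.LOAD_RAW()",
    started: true,
});

// Fully qualified name of the root task, built to match the import identifier format.
const rootFqn = pulumi.interpolate`"${rootTask.database}"."${rootTask.schema}"."${rootTask.name}"`;

const childTask = new snowflake.Task("child", {
    database: "MY_DB",
    schema: "MY_SCHEMA",
    name: "CHILD_TASK",
    warehouse: "MY_WH",
    afters: [rootFqn], // runs after the root task in the same DAG
    sqlStatement: "CALL MY_DB.MY_SCHEMA.TRANSFORM()",
    started: true,
});

const finalizer = new snowflake.Task("finalizer", {
    database: "MY_DB",
    schema: "MY_SCHEMA",
    name: "FINALIZER_TASK",
    warehouse: "MY_WH",
    finalize: rootFqn, // runs after every other task in the graph completes
    sqlStatement: "CALL MY_DB.MY_SCHEMA.NOTIFY_AND_CLEANUP()",
    started: true,
});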
- database string
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- schema string
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- sqlStatement string
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- started boolean
- Specifies if the task should be started or suspended.
- abortDetachedQuery boolean
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORT_DETACHED_QUERY docs.
- afters string[]
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- allowOverlappingExecution string
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration, the provider will use "default", which means the Snowflake default is used for this value.
- autocommit boolean
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- binaryInputFormat string
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARY_INPUT_FORMAT docs.
- binaryOutputFormat string
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARY_OUTPUT_FORMAT docs.
- clientMemoryLimit number
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENT_MEMORY_LIMIT docs.
- clientMetadataRequestUseConnectionCtx boolean
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX docs.
- clientPrefetchThreads number
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENT_PREFETCH_THREADS docs.
- clientResultChunkSize number
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENT_RESULT_CHUNK_SIZE docs.
- clientResultColumnCaseInsensitive boolean
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENT_RESULT_COLUMN_CASE_INSENSITIVE docs.
- clientSessionKeepAlive boolean
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENT_SESSION_KEEP_ALIVE docs.
- clientSessionKeepAliveHeartbeatFrequency number
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY docs.
- clientTimestampTypeMapping string
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENT_TIMESTAMP_TYPE_MAPPING docs.
- comment string
- Specifies a comment for the task.
- config string
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- dateInputFormat string
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_INPUT_FORMAT docs.
- dateOutputFormat string
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_OUTPUT_FORMAT docs.
- enableUnloadPhysicalTypeOptimization boolean
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION docs.
- errorIntegration string
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- errorOnNondeterministicMerge boolean
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_MERGE docs.
- errorOnNondeterministicUpdate boolean
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_UPDATE docs.
- finalize string
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- geographyOutputFormat string
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHY_OUTPUT_FORMAT docs.
- geometryOutputFormat string
- Display format for GEOMETRY values. For more information, check GEOMETRY_OUTPUT_FORMAT docs.
- jdbcTreatTimestampNtzAsUtc boolean
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- jdbcUseSessionTimezone boolean
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBC_USE_SESSION_TIMEZONE docs.
- jsonIndent number
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- lockTimeout number
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- logLevel string
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- multiStatementCount number
- Number of statements to execute when using the multi-statement capability. For more information, check MULTI_STATEMENT_COUNT docs.
- name string
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- noorderSequenceAsDefault boolean
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDER_SEQUENCE_AS_DEFAULT docs.
- odbcTreatDecimalAsInt boolean
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBC_TREAT_DECIMAL_AS_INT docs.
- queryTag string
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- quotedIdentifiersIgnoreCase boolean
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTED_IDENTIFIERS_IGNORE_CASE docs.
- rowsPerResultset number
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWS_PER_RESULTSET docs.
- s3StageVpceDnsName string
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3_STAGE_VPCE_DNS_NAME docs.
- schedule
TaskSchedule 
- The schedule for periodically running the task. This can be a cron or interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set)
- searchPath string
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- statementQueuedTimeoutInSeconds number
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAX_CONCURRENCY_LEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENT_QUEUED_TIMEOUT_IN_SECONDS docs.
- statementTimeoutInSeconds number
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENT_TIMEOUT_IN_SECONDS docs.
- strictJsonOutput boolean
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICT_JSON_OUTPUT docs.
- suspendTaskAfterNumFailures number
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPEND_TASK_AFTER_NUM_FAILURES docs.
- taskAutoRetryAttempts number
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASK_AUTO_RETRY_ATTEMPTS docs.
- timeInputFormat string
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIME_INPUT_FORMAT docs.
- timeOutputFormat string
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIME_OUTPUT_FORMAT docs.
- timestampDayIsAlways24h boolean
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMP_DAY_IS_ALWAYS_24H docs.
- timestampInputFormat string
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMP_INPUT_FORMAT docs.
- timestampLtzOutputFormat string
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- timestampNtzOutputFormat string
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- timestampOutputFormat string
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_OUTPUT_FORMAT docs.
- timestampTypeMapping string
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- timestampTzOutputFormat string
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- timezone string
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- traceLevel string
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- transactionAbortOnError boolean
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTION_ABORT_ON_ERROR docs.
- transactionDefaultIsolationLevel string
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTION_DEFAULT_ISOLATION_LEVEL docs.
- twoDigitCenturyStart number
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- unsupportedDdlAction string
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTED_DDL_ACTION docs.
- useCachedResult boolean
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USE_CACHED_RESULT docs.
- userTaskManagedInitialWarehouseSize string
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse). For more information about warehouses, see docs. For more information, check USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE docs.
- userTaskMinimumTriggerIntervalInSeconds number
- Minimum amount of time between Triggered Task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- userTaskTimeoutMs number
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USER_TASK_TIMEOUT_MS docs.
- warehouse string
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist of only upper-case letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- weekOfYearPolicy number
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- weekStart number
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- when string
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
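As a sketch of the serverless option described above (omitting warehouse and supplying userTaskManagedInitialWarehouseSize instead), in TypeScript; all identifiers are placeholders and the initial size value is illustrative:

import * as snowflake from "@pulumi/snowflake";

// No `warehouse`: Snowflake-managed compute is used, sized initially by the hint below.
const serverlessCleanup = new snowflake.Task("serverless-cleanup", {
    database: "MY_DB",
    schema: "MY_SCHEMA",
    name: "SERVERLESS_CLEANUP",
    userTaskManagedInitialWarehouseSize: "XSMALL", // conflicts with `warehouse`; ignored after a few successful runs
    schedule: { minutes: 1440 }, // once a day
    sqlStatement: "DELETE FROM MY_DB.MY_SCHEMA.EVENTS WHERE EVENT_TS < DATEADD(day, -30, CURRENT_TIMESTAMP())",
    started: true,
});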
- database str
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- schema str
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- sql_statement str
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- started bool
- Specifies if the task should be started or suspended.
- abort_detached_query bool
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORT_DETACHED_QUERY docs.
- afters Sequence[str]
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- allow_overlapping_execution str
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration, the provider will use "default", which means the Snowflake default is used for this value.
- autocommit bool
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- binary_input_format str
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARY_INPUT_FORMAT docs.
- binary_output_format str
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARY_OUTPUT_FORMAT docs.
- client_memory_limit int
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENT_MEMORY_LIMIT docs.
- client_metadata_request_use_connection_ctx bool
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX docs.
- client_prefetch_threads int
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENT_PREFETCH_THREADS docs.
- client_result_chunk_size int
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENT_RESULT_CHUNK_SIZE docs.
- client_result_column_case_insensitive bool
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENT_RESULT_COLUMN_CASE_INSENSITIVE docs.
- client_session_keep_alive bool
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENT_SESSION_KEEP_ALIVE docs.
- client_session_keep_alive_heartbeat_frequency int
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY docs.
- client_timestamp_type_mapping str
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENT_TIMESTAMP_TYPE_MAPPING docs.
- comment str
- Specifies a comment for the task.
- config str
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- date_input_format str
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_INPUT_FORMAT docs.
- date_output_format str
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_OUTPUT_FORMAT docs.
- enable_unload_physical_type_optimization bool
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION docs.
- error_integration str
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- error_on_nondeterministic_merge bool
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_MERGE docs.
- error_on_nondeterministic_update bool
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_UPDATE docs.
- finalize str
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- geography_output_format str
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHY_OUTPUT_FORMAT docs.
- geometry_output_format str
- Display format for GEOMETRY values. For more information, check GEOMETRY_OUTPUT_FORMAT docs.
- jdbc_treat_timestamp_ntz_as_utc bool
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- jdbc_use_session_timezone bool
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBC_USE_SESSION_TIMEZONE docs.
- json_indent int
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- lock_timeout int
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- log_level str
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- multi_statement_count int
- Number of statements to execute when using the multi-statement capability. For more information, check MULTI_STATEMENT_COUNT docs.
- name str
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- noorder_sequence_as_default bool
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDER_SEQUENCE_AS_DEFAULT docs.
- odbc_treat_decimal_as_int bool
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBC_TREAT_DECIMAL_AS_INT docs.
- query_tag str
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- quoted_identifiers_ignore_case bool
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTED_IDENTIFIERS_IGNORE_CASE docs.
- rows_per_resultset int
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWS_PER_RESULTSET docs.
- s3_stage_vpce_dns_name str
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3_STAGE_VPCE_DNS_NAME docs.
- schedule
TaskScheduleArgs
- The schedule for periodically running the task. This can be a cron or interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set)
- search_path str
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- statement_queued_timeout_in_seconds int
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAX_CONCURRENCY_LEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENT_QUEUED_TIMEOUT_IN_SECONDS docs.
- statement_timeout_in_seconds int
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENT_TIMEOUT_IN_SECONDS docs.
- strict_json_output bool
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICT_JSON_OUTPUT docs.
- suspend_task_after_num_failures int
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPEND_TASK_AFTER_NUM_FAILURES docs.
- task_auto_retry_attempts int
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASK_AUTO_RETRY_ATTEMPTS docs.
- time_input_format str
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIME_INPUT_FORMAT docs.
- time_output_format str
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIME_OUTPUT_FORMAT docs.
- timestamp_day_is_always24h bool
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMP_DAY_IS_ALWAYS_24H docs.
- timestamp_input_format str
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMP_INPUT_FORMAT docs.
- timestamp_ltz_output_format str
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- timestamp_ntz_output_format str
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- timestamp_output_format str
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_OUTPUT_FORMAT docs.
- timestamp_type_mapping str
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- timestamp_tz_output_format str
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- timezone str
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- trace_level str
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- transaction_abort_on_error bool
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTION_ABORT_ON_ERROR docs.
- transaction_default_isolation_level str
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTION_DEFAULT_ISOLATION_LEVEL docs.
- two_digit_century_start int
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- unsupported_ddl_action str
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTED_DDL_ACTION docs.
- use_cached_result bool
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USE_CACHED_RESULT docs.
- user_task_managed_initial_warehouse_size str
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse). For more information about warehouses, see docs. For more information, check USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE docs.
- user_task_minimum_trigger_interval_in_seconds int
- Minimum amount of time between Triggered Task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- user_task_timeout_ms int
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USER_TASK_TIMEOUT_MS docs.
- warehouse str
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist only of upper-case letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- week_of_year_policy int
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- week_start int
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- when str
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run. (See the example sketch after this list.)
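To make the relationships between these arguments concrete, here is a minimal sketch in Python. It is not taken from the provider docs: the database, schema, warehouse, and SQL text are placeholder values, and it assumes the pulumi_snowflake package and the TaskScheduleArgs sub-field minutes described above.

import pulumi_snowflake as snowflake

# Placeholder identifiers; substitute your own database, schema, and warehouse.
example_task = snowflake.Task(
    "example_task",
    database="MY_DB",
    schema="MY_SCHEMA",
    name="EXAMPLE_TASK",
    warehouse="MY_WH",  # upper-case only, per the warehouse note above
    schedule=snowflake.TaskScheduleArgs(
        minutes=15,  # set either minutes or using_cron, not both
    ),
    when="1=1",  # placeholder boolean SQL expression
    sql_statement="SELECT CURRENT_TIMESTAMP()",
    user_task_timeout_ms=600000,  # 10-minute limit per run
    started=True,
)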
- database String
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- schema String
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- sqlStatement String
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- started Boolean
- Specifies if the task should be started or suspended.
- abortDetachedQuery Boolean
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORTDETACHEDQUERY docs.
- afters List<String>
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- allowOverlappingExecution String
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration the provider will put "default" there which means to use the Snowflake default for this value.
- autocommit Boolean
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- binaryInputFormat String
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARYINPUTFORMAT docs.
- binaryOutputFormat String
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARYOUTPUTFORMAT docs.
- clientMemoryLimit Number
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENTMEMORYLIMIT docs.
- clientMetadataRequestUseConnectionCtx Boolean
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENTMETADATAREQUESTUSECONNECTION_CTX docs.
- clientPrefetchThreads Number
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENTPREFETCHTHREADS docs.
- clientResultChunkSize Number
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENTRESULTCHUNK_SIZE docs.
- clientResultColumnCaseInsensitive Boolean
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENTRESULTCOLUMNCASEINSENSITIVE docs.
- clientSessionKeepAlive Boolean
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENTSESSIONKEEP_ALIVE docs.
- clientSessionKeepAliveHeartbeatFrequency Number
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENTSESSIONKEEPALIVEHEARTBEAT_FREQUENCY docs.
- clientTimestampTypeMapping String
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENTTIMESTAMPTYPE_MAPPING docs.
- comment String
- Specifies a comment for the task.
- config String
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- dateInputFormat String
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEINPUTFORMAT docs.
- dateOutputFormat String
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEOUTPUTFORMAT docs.
- enableUnloadPhysicalTypeOptimization Boolean
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLEUNLOADPHYSICALTYPEOPTIMIZATION docs.
- errorIntegration String
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- errorOnNondeterministicMerge Boolean
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_MERGE docs.
- errorOnNondeterministicUpdate Boolean
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_UPDATE docs.
- finalize String
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- geographyOutputFormat String
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHYOUTPUTFORMAT docs.
- geometryOutputFormat String
- Display format for GEOMETRY values. For more information, check GEOMETRYOUTPUTFORMAT docs.
- jdbcTreatTimestampNtzAsUtc Boolean
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- jdbcUseSessionTimezone Boolean
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBCUSESESSION_TIMEZONE docs.
- jsonIndent Number
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- lockTimeout Number
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- logLevel String
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- multiStatementCount Number
- Number of statements to execute when using the multi-statement capability. For more information, check MULTISTATEMENTCOUNT docs.
- name String
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- noorderSequenceAsDefault Boolean
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDERSEQUENCEAS_DEFAULT docs.
- odbcTreatDecimalAsInt Boolean
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBCTREATDECIMALASINT docs.
- queryTag String
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY and QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- quotedIdentifiersIgnoreCase Boolean
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTEDIDENTIFIERSIGNORE_CASE docs.
- rowsPerResultset Number
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWSPERRESULTSET docs.
- s3StageVpceDnsName String
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3STAGEVPCEDNSNAME docs.
- schedule Property Map
- The schedule for periodically running the task. This can be a cron expression or an interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set.)
- searchPath String
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- statementQueuedTimeoutInSeconds Number
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAXCONCURRENCYLEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENTQUEUEDTIMEOUTINSECONDS docs.
- statementTimeoutInSeconds Number
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENTTIMEOUTIN_SECONDS docs.
- strictJsonOutput Boolean
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICTJSONOUTPUT docs.
- suspendTaskAfterNumFailures Number
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPENDTASKAFTERNUMFAILURES docs.
- taskAutoRetryAttempts Number
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASKAUTORETRY_ATTEMPTS docs.
- timeInputFormat String
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIMEINPUTFORMAT docs.
- timeOutputFormat String
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIMEOUTPUTFORMAT docs.
- timestampDayIsAlways24h Boolean
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMPDAYISALWAYS24H docs.
- timestampInputFormat String
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMPINPUTFORMAT docs.
- timestampLtzOutputFormat String
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- timestampNtzOutputFormat String
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- timestampOutputFormat String
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMPOUTPUTFORMAT docs.
- timestampTypeMapping String
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- timestampTzOutputFormat String
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- timezone String
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- traceLevel String
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- transactionAbortOnError Boolean
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTIONABORTON_ERROR docs.
- transactionDefaultIsolationLevel String
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTIONDEFAULTISOLATION_LEVEL docs.
- twoDigitCenturyStart Number
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- unsupportedDdlAction String
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTEDDDLACTION docs.
- useCachedResult Boolean
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USECACHEDRESULT docs.
- userTaskManagedInitialWarehouseSize String
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse). For more information about warehouses, see docs. For more information, check USERTASKMANAGEDINITIALWAREHOUSE_SIZE docs.
- userTaskMinimumTriggerIntervalInSeconds Number
- Minimum amount of time between Triggered Task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- userTaskTimeoutMs Number
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USERTASKTIMEOUT_MS docs.
- warehouse String
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist only of upper-case letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- weekOfYearPolicy Number
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- weekStart Number
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- when String
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
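The afters argument and the fully qualified name output compose naturally when building a DAG. The following is a hedged Python sketch, not from the provider docs: it uses placeholder object names and assumes that afters accepts the predecessor task's fully qualified name, which the resource exposes as the fully_qualified_name output described under Outputs below.

import pulumi_snowflake as snowflake

# Root task: the only node in the DAG with a schedule.
root = snowflake.Task(
    "root_task",
    database="MY_DB",
    schema="MY_SCHEMA",
    name="ROOT_TASK",
    warehouse="MY_WH",
    schedule=snowflake.TaskScheduleArgs(minutes=60),
    sql_statement="CALL REFRESH_STAGING()",  # placeholder stored procedure
    started=True,
)

# Child task: no schedule; runs after the root task completes.
child = snowflake.Task(
    "child_task",
    database="MY_DB",
    schema="MY_SCHEMA",
    name="CHILD_TASK",
    warehouse="MY_WH",
    afters=[root.fully_qualified_name],
    sql_statement="CALL REFRESH_MARTS()",  # placeholder stored procedure
    started=True,
)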
Outputs
All input properties are implicitly available as output properties. Additionally, the Task resource produces the following output properties:
- FullyQualifiedName string
- Fully qualified name of the resource. For more information, see object name resolution.
- Id string
- The provider-assigned unique ID for this managed resource.
- Parameters List<TaskParameter>
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- ShowOutputs List<TaskShowOutput>
- Outputs the result of SHOW TASKS for the given task.
- FullyQualifiedName string
- Fully qualified name of the resource. For more information, see object name resolution.
- Id string
- The provider-assigned unique ID for this managed resource.
- Parameters []TaskParameter
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- ShowOutputs []TaskShowOutput
- Outputs the result of SHOW TASKS for the given task.
- fullyQualifiedName String
- Fully qualified name of the resource. For more information, see object name resolution.
- id String
- The provider-assigned unique ID for this managed resource.
- parameters List<TaskParameter>
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- showOutputs List<TaskShowOutput>
- Outputs the result of SHOW TASKS for the given task.
- fullyQualifiedName string
- Fully qualified name of the resource. For more information, see object name resolution.
- id string
- The provider-assigned unique ID for this managed resource.
- parameters TaskParameter[]
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- showOutputs TaskShowOutput[]
- Outputs the result of SHOW TASKS for the given task.
- fully_qualified_name str
- Fully qualified name of the resource. For more information, see object name resolution.
- id str
- The provider-assigned unique ID for this managed resource.
- parameters Sequence[TaskParameter]
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- show_outputs Sequence[TaskShowOutput]
- Outputs the result of SHOW TASKS for the given task.
- fullyQualifiedName String
- Fully qualified name of the resource. For more information, see object name resolution.
- id String
- The provider-assigned unique ID for this managed resource.
- parameters List<Property Map>
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- showOutputs List<Property Map>
- Outputs the result of SHOW TASKS for the given task.
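As a usage note, these computed outputs can be exported like any other Pulumi output. A small hedged sketch in Python, assuming a task resource named child as in the earlier sketches:

import pulumi

pulumi.export("task_fqn", child.fully_qualified_name)
pulumi.export("task_parameters", child.parameters)      # result of SHOW PARAMETERS IN TASK
pulumi.export("task_show_output", child.show_outputs)   # result of SHOW TASKS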
Look up Existing Task Resource
Get an existing Task resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
public static get(name: string, id: Input<ID>, state?: TaskState, opts?: CustomResourceOptions): Task
@staticmethod
def get(resource_name: str,
        id: str,
        opts: Optional[ResourceOptions] = None,
        abort_detached_query: Optional[bool] = None,
        afters: Optional[Sequence[str]] = None,
        allow_overlapping_execution: Optional[str] = None,
        autocommit: Optional[bool] = None,
        binary_input_format: Optional[str] = None,
        binary_output_format: Optional[str] = None,
        client_memory_limit: Optional[int] = None,
        client_metadata_request_use_connection_ctx: Optional[bool] = None,
        client_prefetch_threads: Optional[int] = None,
        client_result_chunk_size: Optional[int] = None,
        client_result_column_case_insensitive: Optional[bool] = None,
        client_session_keep_alive: Optional[bool] = None,
        client_session_keep_alive_heartbeat_frequency: Optional[int] = None,
        client_timestamp_type_mapping: Optional[str] = None,
        comment: Optional[str] = None,
        config: Optional[str] = None,
        database: Optional[str] = None,
        date_input_format: Optional[str] = None,
        date_output_format: Optional[str] = None,
        enable_unload_physical_type_optimization: Optional[bool] = None,
        error_integration: Optional[str] = None,
        error_on_nondeterministic_merge: Optional[bool] = None,
        error_on_nondeterministic_update: Optional[bool] = None,
        finalize: Optional[str] = None,
        fully_qualified_name: Optional[str] = None,
        geography_output_format: Optional[str] = None,
        geometry_output_format: Optional[str] = None,
        jdbc_treat_timestamp_ntz_as_utc: Optional[bool] = None,
        jdbc_use_session_timezone: Optional[bool] = None,
        json_indent: Optional[int] = None,
        lock_timeout: Optional[int] = None,
        log_level: Optional[str] = None,
        multi_statement_count: Optional[int] = None,
        name: Optional[str] = None,
        noorder_sequence_as_default: Optional[bool] = None,
        odbc_treat_decimal_as_int: Optional[bool] = None,
        parameters: Optional[Sequence[TaskParameterArgs]] = None,
        query_tag: Optional[str] = None,
        quoted_identifiers_ignore_case: Optional[bool] = None,
        rows_per_resultset: Optional[int] = None,
        s3_stage_vpce_dns_name: Optional[str] = None,
        schedule: Optional[TaskScheduleArgs] = None,
        schema: Optional[str] = None,
        search_path: Optional[str] = None,
        show_outputs: Optional[Sequence[TaskShowOutputArgs]] = None,
        sql_statement: Optional[str] = None,
        started: Optional[bool] = None,
        statement_queued_timeout_in_seconds: Optional[int] = None,
        statement_timeout_in_seconds: Optional[int] = None,
        strict_json_output: Optional[bool] = None,
        suspend_task_after_num_failures: Optional[int] = None,
        task_auto_retry_attempts: Optional[int] = None,
        time_input_format: Optional[str] = None,
        time_output_format: Optional[str] = None,
        timestamp_day_is_always24h: Optional[bool] = None,
        timestamp_input_format: Optional[str] = None,
        timestamp_ltz_output_format: Optional[str] = None,
        timestamp_ntz_output_format: Optional[str] = None,
        timestamp_output_format: Optional[str] = None,
        timestamp_type_mapping: Optional[str] = None,
        timestamp_tz_output_format: Optional[str] = None,
        timezone: Optional[str] = None,
        trace_level: Optional[str] = None,
        transaction_abort_on_error: Optional[bool] = None,
        transaction_default_isolation_level: Optional[str] = None,
        two_digit_century_start: Optional[int] = None,
        unsupported_ddl_action: Optional[str] = None,
        use_cached_result: Optional[bool] = None,
        user_task_managed_initial_warehouse_size: Optional[str] = None,
        user_task_minimum_trigger_interval_in_seconds: Optional[int] = None,
        user_task_timeout_ms: Optional[int] = None,
        warehouse: Optional[str] = None,
        week_of_year_policy: Optional[int] = None,
        week_start: Optional[int] = None,
        when: Optional[str] = None) -> Task
func GetTask(ctx *Context, name string, id IDInput, state *TaskState, opts ...ResourceOption) (*Task, error)
public static Task Get(string name, Input<string> id, TaskState? state, CustomResourceOptions? opts = null)
public static Task get(String name, Output<String> id, TaskState state, CustomResourceOptions options)
resources:
  _:
    type: snowflake:Task
    get:
      id: ${id}
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- resource_name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
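For example, in Python the lookup might look like the following hedged sketch. It is not from the provider docs and assumes the resource ID uses the same fully qualified, quoted form shown in the Import section, with placeholder names:

import pulumi
import pulumi_snowflake as snowflake

existing = snowflake.Task.get(
    "existing_task",
    '"MY_DB"."MY_SCHEMA"."EXISTING_TASK"',  # assumed ID format, matching the import ID
)
pulumi.export("existing_task_fqn", existing.fully_qualified_name)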
- AbortDetachedQuery bool
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORTDETACHEDQUERY docs.
- Afters List<string>
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- AllowOverlappingExecution string
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration the provider will put "default" there which means to use the Snowflake default for this value.
- Autocommit bool
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- BinaryInputFormat string
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARYINPUTFORMAT docs.
- BinaryOutputFormat string
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARYOUTPUTFORMAT docs.
- ClientMemoryLimit int
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENTMEMORYLIMIT docs.
- ClientMetadataRequestUseConnectionCtx bool
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENTMETADATAREQUESTUSECONNECTION_CTX docs.
- ClientPrefetchThreads int
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENTPREFETCHTHREADS docs.
- ClientResultChunkSize int
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENTRESULTCHUNK_SIZE docs.
- ClientResultColumnCaseInsensitive bool
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENTRESULTCOLUMNCASEINSENSITIVE docs.
- ClientSessionKeepAlive bool
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENTSESSIONKEEP_ALIVE docs.
- ClientSessionKeepAliveHeartbeatFrequency int
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENTSESSIONKEEPALIVEHEARTBEAT_FREQUENCY docs.
- ClientTimestampTypeMapping string
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENTTIMESTAMPTYPE_MAPPING docs.
- Comment string
- Specifies a comment for the task.
- Config string
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- Database string
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- DateInputFormat string
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEINPUTFORMAT docs.
- DateOutputFormat string
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEOUTPUTFORMAT docs.
- EnableUnloadPhysicalTypeOptimization bool
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLEUNLOADPHYSICALTYPEOPTIMIZATION docs.
- ErrorIntegration string
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- ErrorOnNondeterministicMerge bool
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_MERGE docs.
- ErrorOnNondeterministicUpdate bool
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_UPDATE docs.
- Finalize string
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- FullyQualifiedName string
- Fully qualified name of the resource. For more information, see object name resolution.
- GeographyOutputFormat string
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHYOUTPUTFORMAT docs.
- GeometryOutputFormat string
- Display format for GEOMETRY values. For more information, check GEOMETRYOUTPUTFORMAT docs.
- JdbcTreatTimestampNtzAsUtc bool
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- JdbcUseSessionTimezone bool
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBCUSESESSION_TIMEZONE docs.
- JsonIndent int
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- LockTimeout int
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- LogLevel string
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- MultiStatementCount int
- Number of statements to execute when using the multi-statement capability. For more information, check MULTISTATEMENTCOUNT docs.
- Name string
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- NoorderSequenceAsDefault bool
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDERSEQUENCEAS_DEFAULT docs.
- OdbcTreatDecimalAsInt bool
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBCTREATDECIMALASINT docs.
- Parameters List<TaskParameter>
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- QueryTag string
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY and QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- QuotedIdentifiersIgnoreCase bool
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTEDIDENTIFIERSIGNORE_CASE docs.
- RowsPerResultset int
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWSPERRESULTSET docs.
- S3StageVpceDnsName string
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3STAGEVPCEDNSNAME docs.
- Schedule TaskSchedule
- The schedule for periodically running the task. This can be a cron expression or an interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set.)
- Schema string
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- SearchPath string
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- ShowOutputs List<TaskShowOutput>
- Outputs the result of SHOW TASKS for the given task.
- SqlStatement string
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- Started bool
- Specifies if the task should be started or suspended.
- StatementQueuedTimeoutInSeconds int
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAXCONCURRENCYLEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENTQUEUEDTIMEOUTINSECONDS docs.
- StatementTimeoutInSeconds int
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENTTIMEOUTIN_SECONDS docs.
- StrictJsonOutput bool
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICTJSONOUTPUT docs.
- SuspendTaskAfterNumFailures int
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPENDTASKAFTERNUMFAILURES docs.
- TaskAutoRetryAttempts int
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASKAUTORETRY_ATTEMPTS docs.
- TimeInputFormat string
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIMEINPUTFORMAT docs.
- TimeOutputFormat string
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIMEOUTPUTFORMAT docs.
- TimestampDayIsAlways24h bool
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMPDAYISALWAYS24H docs.
- TimestampInputFormat string
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMPINPUTFORMAT docs.
- TimestampLtzOutputFormat string
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- TimestampNtzOutputFormat string
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- TimestampOutputFormat string
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMPOUTPUTFORMAT docs.
- TimestampTypeMapping string
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- TimestampTzOutputFormat string
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- Timezone string
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- TraceLevel string
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- TransactionAbortOnError bool
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTIONABORTON_ERROR docs.
- TransactionDefaultIsolationLevel string
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTIONDEFAULTISOLATION_LEVEL docs.
- TwoDigitCenturyStart int
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- UnsupportedDdlAction string
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTEDDDLACTION docs.
- UseCachedResult bool
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USECACHEDRESULT docs.
- UserTaskManagedInitialWarehouseSize string
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse). For more information about warehouses, see docs. For more information, check USERTASKMANAGEDINITIALWAREHOUSE_SIZE docs.
- UserTaskMinimumTriggerIntervalInSeconds int
- Minimum amount of time between Triggered Task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- UserTaskTimeoutMs int
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USERTASKTIMEOUT_MS docs.
- Warehouse string
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist only of upper-case letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- WeekOfYearPolicy int
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- WeekStart int
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- When string
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
- AbortDetachedQuery bool
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORTDETACHEDQUERY docs.
- Afters []string
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- AllowOverlappingExecution string
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration the provider will put "default" there which means to use the Snowflake default for this value.
- Autocommit bool
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- BinaryInputFormat string
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARYINPUTFORMAT docs.
- BinaryOutputFormat string
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARYOUTPUTFORMAT docs.
- ClientMemoryLimit int
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENTMEMORYLIMIT docs.
- ClientMetadataRequestUseConnectionCtx bool
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENTMETADATAREQUESTUSECONNECTION_CTX docs.
- ClientPrefetchThreads int
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENTPREFETCHTHREADS docs.
- ClientResultChunkSize int
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENTRESULTCHUNK_SIZE docs.
- ClientResultColumnCaseInsensitive bool
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENTRESULTCOLUMNCASEINSENSITIVE docs.
- ClientSessionKeepAlive bool
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENTSESSIONKEEP_ALIVE docs.
- ClientSessionKeepAliveHeartbeatFrequency int
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENTSESSIONKEEPALIVEHEARTBEAT_FREQUENCY docs.
- ClientTimestampTypeMapping string
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENTTIMESTAMPTYPE_MAPPING docs.
- Comment string
- Specifies a comment for the task.
- Config string
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- Database string
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- DateInputFormat string
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEINPUTFORMAT docs.
- DateOutputFormat string
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEOUTPUTFORMAT docs.
- EnableUnloadPhysicalTypeOptimization bool
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLEUNLOADPHYSICALTYPEOPTIMIZATION docs.
- ErrorIntegration string
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- ErrorOnNondeterministicMerge bool
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_MERGE docs.
- ErrorOnNondeterministicUpdate bool
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_UPDATE docs.
- Finalize string
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- FullyQualifiedName string
- Fully qualified name of the resource. For more information, see object name resolution.
- GeographyOutputFormat string
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHYOUTPUTFORMAT docs.
- GeometryOutputFormat string
- Display format for GEOMETRY values. For more information, check GEOMETRYOUTPUTFORMAT docs.
- JdbcTreatTimestampNtzAsUtc bool
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- JdbcUseSessionTimezone bool
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBCUSESESSION_TIMEZONE docs.
- JsonIndent int
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- LockTimeout int
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- LogLevel string
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- MultiStatementCount int
- Number of statements to execute when using the multi-statement capability. For more information, check MULTISTATEMENTCOUNT docs.
- Name string
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- NoorderSequenceAsDefault bool
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDER_SEQUENCE_AS_DEFAULT docs.
- OdbcTreatDecimalAsInt bool
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBC_TREAT_DECIMAL_AS_INT docs.
- Parameters []TaskParameterArgs
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- QueryTag string
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- QuotedIdentifiersIgnoreCase bool
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTED_IDENTIFIERS_IGNORE_CASE docs.
- RowsPerResultset int
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWS_PER_RESULTSET docs.
- S3StageVpceDnsName string
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3_STAGE_VPCE_DNS_NAME docs.
- Schedule TaskScheduleArgs
- The schedule for periodically running the task. This can be a cron expression or an interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set.) A sketch of a scheduled task graph is shown in the example after this parameter list.
- Schema string
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- SearchPath string
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- ShowOutputs []TaskShowOutputArgs
- Outputs the result of SHOW TASKS for the given task.
- SqlStatement string
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- Started bool
- Specifies if the task should be started or suspended.
- StatementQueuedTimeoutInSeconds int
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAX_CONCURRENCY_LEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENT_QUEUED_TIMEOUT_IN_SECONDS docs.
- StatementTimeoutInSeconds int
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENT_TIMEOUT_IN_SECONDS docs.
- StrictJsonOutput bool
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICT_JSON_OUTPUT docs.
- SuspendTaskAfterNumFailures int
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPEND_TASK_AFTER_NUM_FAILURES docs.
- TaskAutoRetryAttempts int
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASK_AUTO_RETRY_ATTEMPTS docs.
- TimeInputFormat string
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIME_INPUT_FORMAT docs.
- TimeOutputFormat string
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIME_OUTPUT_FORMAT docs.
- TimestampDayIsAlways24h bool
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMP_DAY_IS_ALWAYS_24H docs.
- TimestampInputFormat string
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMP_INPUT_FORMAT docs.
- TimestampLtzOutputFormat string
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- TimestampNtzOutputFormat string
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- TimestampOutputFormat string
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_OUTPUT_FORMAT docs.
- TimestampTypeMapping string
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- TimestampTzOutputFormat string
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- Timezone string
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- TraceLevel string
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- TransactionAbortOnError bool
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTION_ABORT_ON_ERROR docs.
- TransactionDefaultIsolationLevel string
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTION_DEFAULT_ISOLATION_LEVEL docs.
- TwoDigitCenturyStart int
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- UnsupportedDdlAction string
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTED_DDL_ACTION docs.
- UseCachedResult bool
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USE_CACHED_RESULT docs.
- UserTaskManagedInitialWarehouseSize string
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse.) For more information about warehouses, see docs. For more information, check USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE docs.
- UserTaskMinimumTriggerIntervalInSeconds int
- Minimum amount of time between triggered task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- UserTaskTimeoutMs int
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USER_TASK_TIMEOUT_MS docs.
- Warehouse string
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist only of upper-case letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- WeekOfYearPolicy int
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- WeekStart int
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- When string
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
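As a rough illustration of how schedule, when, and afters fit together, here is a minimal Python sketch (not taken from the provider docs; the database, schema, warehouse, stream, table, and SQL names are placeholder assumptions you would replace with your own):

import pulumi_snowflake as snowflake

# Root task: runs every 10 minutes, but only when the (hypothetical) stream has data.
root = snowflake.Task("root",
    database="MY_DB",
    schema="MY_SCHEMA",
    warehouse="MY_WH",
    schedule=snowflake.TaskScheduleArgs(minutes=10),
    when="SYSTEM$STREAM_HAS_DATA('MY_DB.MY_SCHEMA.MY_STREAM')",
    sql_statement="INSERT INTO raw_copy SELECT * FROM MY_STREAM",
    started=True)

# Child task: no schedule of its own; it lists the root task as a predecessor.
child = snowflake.Task("child",
    database="MY_DB",
    schema="MY_SCHEMA",
    warehouse="MY_WH",
    afters=[root.fully_qualified_name],
    sql_statement="CALL transform_proc()",
    started=True)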
- abortDetachedQuery Boolean
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORT_DETACHED_QUERY docs.
- afters List<String>
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- allowOverlappingExecution String
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration, the provider will put "default" there, which means to use the Snowflake default for this value.
- autocommit Boolean
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- binaryInputFormat String
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARY_INPUT_FORMAT docs.
- binaryOutputFormat String
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARY_OUTPUT_FORMAT docs.
- clientMemoryLimit Integer
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENT_MEMORY_LIMIT docs.
- clientMetadataRequestUseConnectionCtx Boolean
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX docs.
- clientPrefetchThreads Integer
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENT_PREFETCH_THREADS docs.
- clientResultChunkSize Integer
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENT_RESULT_CHUNK_SIZE docs.
- clientResultColumnCaseInsensitive Boolean
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENT_RESULT_COLUMN_CASE_INSENSITIVE docs.
- clientSessionKeepAlive Boolean
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENT_SESSION_KEEP_ALIVE docs.
- clientSessionKeepAliveHeartbeatFrequency Integer
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY docs.
- clientTimestampTypeMapping String
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENT_TIMESTAMP_TYPE_MAPPING docs.
- comment String
- Specifies a comment for the task.
- config String
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- database String
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- dateInputFormat String
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_INPUT_FORMAT docs.
- dateOutputFormat String
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_OUTPUT_FORMAT docs.
- enableUnloadPhysicalTypeOptimization Boolean
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION docs.
- errorIntegration String
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- errorOnNondeterministicMerge Boolean
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_MERGE docs.
- errorOnNondeterministicUpdate Boolean
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_UPDATE docs.
- finalize_ String
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- fullyQualifiedName String
- Fully qualified name of the resource. For more information, see object name resolution.
- geographyOutputFormat String
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHY_OUTPUT_FORMAT docs.
- geometryOutputFormat String
- Display format for GEOMETRY values. For more information, check GEOMETRY_OUTPUT_FORMAT docs.
- jdbcTreatTimestampNtzAsUtc Boolean
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- jdbcUseSessionTimezone Boolean
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBC_USE_SESSION_TIMEZONE docs.
- jsonIndent Integer
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- lockTimeout Integer
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- logLevel String
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- multiStatementCount Integer
- Number of statements to execute when using the multi-statement capability. For more information, check MULTI_STATEMENT_COUNT docs.
- name String
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- noorderSequenceAsDefault Boolean
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDER_SEQUENCE_AS_DEFAULT docs.
- odbcTreatDecimalAsInt Boolean
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBC_TREAT_DECIMAL_AS_INT docs.
- parameters List<TaskParameter>
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- queryTag String
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- quotedIdentifiersIgnoreCase Boolean
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTED_IDENTIFIERS_IGNORE_CASE docs.
- rowsPerResultset Integer
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWS_PER_RESULTSET docs.
- s3StageVpceDnsName String
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3_STAGE_VPCE_DNS_NAME docs.
- schedule TaskSchedule
- The schedule for periodically running the task. This can be a cron expression or an interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set.) A sketch that combines a scheduled root task with config and finalize is shown in the example after this parameter list.
- schema String
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- searchPath String
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- showOutputs List<TaskShowOutput>
- Outputs the result of SHOW TASKS for the given task.
- sqlStatement String
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- started Boolean
- Specifies if the task should be started or suspended.
- statementQueuedTimeoutInSeconds Integer
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAX_CONCURRENCY_LEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENT_QUEUED_TIMEOUT_IN_SECONDS docs.
- statementTimeoutInSeconds Integer
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENT_TIMEOUT_IN_SECONDS docs.
- strictJsonOutput Boolean
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICT_JSON_OUTPUT docs.
- suspendTaskAfterNumFailures Integer
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPEND_TASK_AFTER_NUM_FAILURES docs.
- taskAutoRetryAttempts Integer
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASK_AUTO_RETRY_ATTEMPTS docs.
- timeInputFormat String
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIME_INPUT_FORMAT docs.
- timeOutputFormat String
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIME_OUTPUT_FORMAT docs.
- timestampDayIsAlways24h Boolean
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMP_DAY_IS_ALWAYS_24H docs.
- timestampInputFormat String
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMP_INPUT_FORMAT docs.
- timestampLtzOutputFormat String
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- timestampNtzOutputFormat String
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- timestampOutputFormat String
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_OUTPUT_FORMAT docs.
- timestampTypeMapping String
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- timestampTzOutputFormat String
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- timezone String
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- traceLevel String
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- transactionAbortOnError Boolean
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTION_ABORT_ON_ERROR docs.
- transactionDefaultIsolationLevel String
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTION_DEFAULT_ISOLATION_LEVEL docs.
- twoDigitCenturyStart Integer
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- unsupportedDdlAction String
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTED_DDL_ACTION docs.
- useCachedResult Boolean
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USE_CACHED_RESULT docs.
- userTaskManagedInitialWarehouseSize String
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse.) For more information about warehouses, see docs. For more information, check USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE docs.
- userTaskMinimumTriggerIntervalInSeconds Integer
- Minimum amount of time between triggered task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- userTaskTimeoutMs Integer
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USER_TASK_TIMEOUT_MS docs.
- warehouse String
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist only of upper-case letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- weekOfYearPolicy Integer
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- weekStart Integer
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- when String
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
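To illustrate config and finalize, here is a hedged Python sketch of a scheduled root task that carries graph-wide JSON configuration, plus a finalizer task attached to it. All object names, the cron text, and the stored procedures are placeholder assumptions, not values from the provider docs:

import json
import pulumi_snowflake as snowflake

# Root task: carries key/value configuration that any task in the graph can read.
root = snowflake.Task("root",
    database="MY_DB",
    schema="MY_SCHEMA",
    warehouse="MY_WH",
    schedule=snowflake.TaskScheduleArgs(using_cron="0 2 * * * UTC"),
    config=json.dumps({"environment": "prod", "batch_size": 1000}),
    sql_statement="CALL load_batch()",
    started=True)

# Finalizer task: references the root task and runs once the whole graph finishes,
# which is a natural place for cleanup and notification logic.
cleanup = snowflake.Task("cleanup",
    database="MY_DB",
    schema="MY_SCHEMA",
    warehouse="MY_WH",
    finalize=root.fully_qualified_name,
    sql_statement="CALL release_resources()",
    started=True)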
- abortDetachedQuery boolean
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORT_DETACHED_QUERY docs.
- afters string[]
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- allowOverlappingExecution string
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration, the provider will put "default" there, which means to use the Snowflake default for this value.
- autocommit boolean
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- binaryInputFormat string
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARY_INPUT_FORMAT docs.
- binaryOutputFormat string
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARY_OUTPUT_FORMAT docs.
- clientMemoryLimit number
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENT_MEMORY_LIMIT docs.
- clientMetadataRequestUseConnectionCtx boolean
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX docs.
- clientPrefetchThreads number
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENT_PREFETCH_THREADS docs.
- clientResultChunkSize number
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENT_RESULT_CHUNK_SIZE docs.
- clientResultColumnCaseInsensitive boolean
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENT_RESULT_COLUMN_CASE_INSENSITIVE docs.
- clientSessionKeepAlive boolean
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENT_SESSION_KEEP_ALIVE docs.
- clientSessionKeepAliveHeartbeatFrequency number
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY docs.
- clientTimestampTypeMapping string
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENT_TIMESTAMP_TYPE_MAPPING docs.
- comment string
- Specifies a comment for the task.
- config string
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- database string
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- dateInputFormat string
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_INPUT_FORMAT docs.
- dateOutputFormat string
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_OUTPUT_FORMAT docs.
- enableUnloadPhysicalTypeOptimization boolean
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION docs.
- errorIntegration string
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- errorOnNondeterministicMerge boolean
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_MERGE docs.
- errorOnNondeterministicUpdate boolean
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_UPDATE docs.
- finalize string
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- fullyQualifiedName string
- Fully qualified name of the resource. For more information, see object name resolution.
- geographyOutputFormat string
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHY_OUTPUT_FORMAT docs.
- geometryOutputFormat string
- Display format for GEOMETRY values. For more information, check GEOMETRY_OUTPUT_FORMAT docs.
- jdbcTreatTimestampNtzAsUtc boolean
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- jdbcUseSessionTimezone boolean
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBC_USE_SESSION_TIMEZONE docs.
- jsonIndent number
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- lockTimeout number
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- logLevel string
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- multiStatementCount number
- Number of statements to execute when using the multi-statement capability. For more information, check MULTI_STATEMENT_COUNT docs.
- name string
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- noorderSequenceAsDefault boolean
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDER_SEQUENCE_AS_DEFAULT docs.
- odbcTreatDecimalAsInt boolean
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBC_TREAT_DECIMAL_AS_INT docs.
- parameters TaskParameter[]
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- queryTag string
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- quotedIdentifiersIgnoreCase boolean
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTED_IDENTIFIERS_IGNORE_CASE docs.
- rowsPerResultset number
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWS_PER_RESULTSET docs.
- s3StageVpceDnsName string
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3_STAGE_VPCE_DNS_NAME docs.
- schedule TaskSchedule
- The schedule for periodically running the task. This can be a cron expression or an interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set.)
- schema string
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- searchPath string
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- showOutputs TaskShowOutput[]
- Outputs the result of SHOW TASKS for the given task.
- sqlStatement string
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- started boolean
- Specifies if the task should be started or suspended.
- statementQueuedTimeoutInSeconds number
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAX_CONCURRENCY_LEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENT_QUEUED_TIMEOUT_IN_SECONDS docs.
- statementTimeoutInSeconds number
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENT_TIMEOUT_IN_SECONDS docs.
- strictJsonOutput boolean
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICT_JSON_OUTPUT docs.
- suspendTaskAfterNumFailures number
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPEND_TASK_AFTER_NUM_FAILURES docs.
- taskAutoRetryAttempts number
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASK_AUTO_RETRY_ATTEMPTS docs.
- timeInputFormat string
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIME_INPUT_FORMAT docs.
- timeOutputFormat string
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIME_OUTPUT_FORMAT docs.
- timestampDayIsAlways24h boolean
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMP_DAY_IS_ALWAYS_24H docs.
- timestampInputFormat string
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMP_INPUT_FORMAT docs.
- timestampLtzOutputFormat string
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- timestampNtzOutputFormat string
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- timestampOutputFormat string
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_OUTPUT_FORMAT docs.
- timestampTypeMapping string
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- timestampTzOutputFormat string
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- timezone string
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- traceLevel string
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- transactionAbortOnError boolean
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTION_ABORT_ON_ERROR docs.
- transactionDefaultIsolationLevel string
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTION_DEFAULT_ISOLATION_LEVEL docs.
- twoDigitCenturyStart number
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- unsupportedDdlAction string
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTED_DDL_ACTION docs.
- useCachedResult boolean
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USE_CACHED_RESULT docs.
- userTaskManagedInitialWarehouseSize string
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse.) For more information about warehouses, see docs. A serverless-task sketch using this parameter appears in the example after this parameter list. For more information, check USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE docs.
- userTaskMinimumTriggerIntervalInSeconds number
- Minimum amount of time between triggered task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- userTaskTimeoutMs number
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USER_TASK_TIMEOUT_MS docs.
- warehouse string
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist only of upper-case letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- weekOfYearPolicy number
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- weekStart number
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- when string
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
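For the serverless case, the following Python sketch omits warehouse and relies on Snowflake-managed compute; the XSMALL size, the retention SQL, and the limits are illustrative assumptions rather than recommendations:

import pulumi_snowflake as snowflake

# Serverless task: no warehouse is set, so Snowflake-managed compute is used.
# The initial warehouse size only matters until Snowflake has run history to size it.
serverless = snowflake.Task("serverless",
    database="MY_DB",
    schema="MY_SCHEMA",
    schedule=snowflake.TaskScheduleArgs(minutes=60),
    user_task_managed_initial_warehouse_size="XSMALL",
    suspend_task_after_num_failures=3,       # suspend after 3 consecutive failures
    user_task_timeout_ms=600000,             # 10-minute per-run limit
    sql_statement="DELETE FROM audit_log WHERE event_time < DATEADD(day, -30, CURRENT_TIMESTAMP())",
    started=True)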
- abort_detached_query bool
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORT_DETACHED_QUERY docs.
- afters Sequence[str]
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- allow_overlapping_execution str
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration, the provider will put "default" there, which means to use the Snowflake default for this value.
- autocommit bool
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- binary_input_format str
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARY_INPUT_FORMAT docs.
- binary_output_format str
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARY_OUTPUT_FORMAT docs.
- client_memory_limit int
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENT_MEMORY_LIMIT docs.
- client_metadata_request_use_connection_ctx bool
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX docs.
- client_prefetch_threads int
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENT_PREFETCH_THREADS docs.
- client_result_chunk_size int
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENT_RESULT_CHUNK_SIZE docs.
- client_result_column_case_insensitive bool
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENT_RESULT_COLUMN_CASE_INSENSITIVE docs.
- client_session_keep_alive bool
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENT_SESSION_KEEP_ALIVE docs.
- client_session_keep_alive_heartbeat_frequency int
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY docs.
- client_timestamp_type_mapping str
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENT_TIMESTAMP_TYPE_MAPPING docs.
- comment str
- Specifies a comment for the task.
- config str
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- database str
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- date_input_format str
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_INPUT_FORMAT docs.
- date_output_format str
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATE_OUTPUT_FORMAT docs.
- enable_unload_physical_type_optimization bool
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION docs.
- error_integration str
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- error_on_nondeterministic_merge bool
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_MERGE docs.
- error_on_nondeterministic_update bool
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERROR_ON_NONDETERMINISTIC_UPDATE docs.
- finalize str
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- fully_qualified_name str
- Fully qualified name of the resource. For more information, see object name resolution.
- geography_output_format str
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHY_OUTPUT_FORMAT docs.
- geometry_output_format str
- Display format for GEOMETRY values. For more information, check GEOMETRY_OUTPUT_FORMAT docs.
- jdbc_treat_timestamp_ntz_as_utc bool
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- jdbc_use_session_timezone bool
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBC_USE_SESSION_TIMEZONE docs.
- json_indent int
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- lock_timeout int
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- log_level str
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- multi_statement_count int
- Number of statements to execute when using the multi-statement capability. For more information, check MULTI_STATEMENT_COUNT docs.
- name str
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- noorder_sequence_as_default bool
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDERSEQUENCEAS_DEFAULT docs.
- odbc_treat_decimal_as_int bool
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBCTREATDECIMALASINT docs.
- parameters Sequence[TaskParameterArgs]
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- query_tag str
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- quoted_identifiers_ignore_case bool
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTEDIDENTIFIERSIGNORE_CASE docs.
- rows_per_resultset int
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWSPERRESULTSET docs.
- s3_stage_vpce_dns_name str
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3STAGEVPCEDNSNAME docs.
- schedule TaskScheduleArgs
- The schedule for periodically running the task. This can be a cron expression or an interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set.) See the sketch after this property list for an example that sets using_cron.
- schema str
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- search_path str
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- show_outputs Sequence[TaskShowOutputArgs]
- Outputs the result of SHOW TASKS for the given task.
- sql_statement str
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- started bool
- Specifies if the task should be started or suspended.
- statement_queued_timeout_in_seconds int
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAXCONCURRENCYLEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENTQUEUEDTIMEOUTINSECONDS docs.
- statement_timeout_in_seconds int
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENTTIMEOUTIN_SECONDS docs.
- strict_json_output bool
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICTJSONOUTPUT docs.
- suspend_task_after_num_failures int
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPENDTASKAFTERNUMFAILURES docs.
- task_auto_retry_attempts int
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASKAUTORETRY_ATTEMPTS docs.
- time_input_format str
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIMEINPUTFORMAT docs.
- time_output_format str
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIMEOUTPUTFORMAT docs.
- timestamp_day_is_always24h bool
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMPDAYISALWAYS24H docs.
- timestamp_input_format str
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMPINPUTFORMAT docs.
- timestamp_ltz_output_format str
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- timestamp_ntz_output_format str
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- timestamp_output_format str
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMPOUTPUTFORMAT docs.
- timestamp_type_mapping str
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- timestamp_tz_output_format str
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- timezone str
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- trace_level str
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- transaction_abort_on_error bool
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTIONABORTON_ERROR docs.
- transaction_default_isolation_level str
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTIONDEFAULTISOLATION_LEVEL docs.
- two_digit_century_start int
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- unsupported_ddl_action str
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTEDDDLACTION docs.
- use_cached_result bool
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USECACHEDRESULT docs.
- user_task_managed_initial_warehouse_size str
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse). For more information about warehouses, see docs. For more information, check USERTASKMANAGEDINITIALWAREHOUSE_SIZE docs.
- user_task_minimum_trigger_interval_in_seconds int
- Minimum amount of time between Triggered Task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- user_task_timeout_ms int
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USERTASKTIMEOUT_MS docs.
- warehouse str
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist of only upper-cased letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- week_of_year_policy int
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- week_start int
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- when str
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
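The following is a minimal, hedged sketch of how several of the properties above might be combined: a serverless task on a cron schedule with a when condition. Object names such as MY_DB, MY_SCHEMA, MY_STREAM, and LOAD_NIGHTLY are placeholders, not values from this page, and the warehouse size string is an assumed valid value.

import pulumi_snowflake as snowflake

# Serverless task sketch (placeholder identifiers throughout).
nightly_load = snowflake.Task("nightly_load",
    database="MY_DB",
    schema="MY_SCHEMA",
    name="NIGHTLY_LOAD",
    # `warehouse` is omitted, so Snowflake-managed compute is used; the initial
    # size below is an assumed valid value for this parameter.
    user_task_managed_initial_warehouse_size="XSMALL",
    # One of the sub-fields `minutes` or `using_cron` should be set.
    schedule=snowflake.TaskScheduleArgs(using_cron="0 2 * * * UTC"),
    # Skip the run when the (hypothetical) stream has no new data.
    when="SYSTEM$STREAM_HAS_DATA('MY_DB.MY_SCHEMA.MY_STREAM')",
    sql_statement="CALL MY_DB.MY_SCHEMA.LOAD_NIGHTLY()",
    comment="nightly load",
    started=True)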
- abortDetachedQuery Boolean
- Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption). For more information, check ABORTDETACHEDQUERY docs.
- afters List<String>
- Specifies one or more predecessor tasks for the current task. Use this option to create a DAG of tasks or add this task to an existing DAG. A DAG is a series of tasks that starts with a scheduled root task and is linked together by dependencies. Due to technical limitations (read more here), avoid using the following characters: |,.,". See the task graph sketch after this property list.
- allowOverlappingExecution String
- By default, Snowflake ensures that only one instance of a particular DAG is allowed to run at a time; setting the parameter value to TRUE permits DAG runs to overlap. Available options are: "true" or "false". When the value is not set in the configuration, the provider will put "default" there, which means to use the Snowflake default for this value.
- autocommit Boolean
- Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions. For more information, check AUTOCOMMIT docs.
- binaryInputFormat String
- The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output. For more information, check BINARYINPUTFORMAT docs.
- binaryOutputFormat String
- The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output. For more information, check BINARYOUTPUTFORMAT docs.
- clientMemoryLimit Number
- Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB). For more information, check CLIENTMEMORYLIMIT docs.
- clientMetadataRequestUseConnectionCtx Boolean
- For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly. For more information, check CLIENTMETADATAREQUESTUSECONNECTION_CTX docs.
- clientPrefetchThreads Number
- Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance. For more information, check CLIENTPREFETCHTHREADS docs.
- clientResultChunkSize Number
- Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks. For more information, check CLIENTRESULTCHUNK_SIZE docs.
- clientResultColumnCaseInsensitive Boolean
- Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC. For more information, check CLIENTRESULTCOLUMNCASEINSENSITIVE docs.
- clientSessionKeepAlive Boolean
- Parameter that indicates whether to force a user to log in again after a period of inactivity in the session. For more information, check CLIENTSESSIONKEEP_ALIVE docs.
- clientSessionKeepAliveHeartbeatFrequency Number
- Number of seconds in-between client attempts to update the token for the session. For more information, check CLIENTSESSIONKEEPALIVEHEARTBEAT_FREQUENCY docs.
- clientTimestampTypeMapping String
- Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data. For more information, check CLIENTTIMESTAMPTYPE_MAPPING docs.
- comment String
- Specifies a comment for the task.
- config String
- Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format.
- database String
- The database in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- dateInputFormat String
- Specifies the input format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEINPUTFORMAT docs.
- dateOutputFormat String
- Specifies the display format for the DATE data type. For more information, see Date and time input and output formats. For more information, check DATEOUTPUTFORMAT docs.
- enableUnloadPhysicalTypeOptimization Boolean
- Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table). For more information, check ENABLEUNLOADPHYSICALTYPEOPTIMIZATION docs.
- errorIntegration String
- Specifies the name of the notification integration used for error notifications. Due to technical limitations (read more here), avoid using the following characters: |,.,". For more information about this resource, see docs.
- errorOnNondeterministicMerge Boolean
- Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_MERGE docs.
- errorOnNondeterministicUpdate Boolean
- Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row. For more information, check ERRORONNONDETERMINISTIC_UPDATE docs.
- finalize String
- Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Release and cleanup of task graphs. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- fullyQualifiedName String
- Fully qualified name of the resource. For more information, see object name resolution.
- geographyOutputFormat String
- Display format for GEOGRAPHY values. For more information, check GEOGRAPHYOUTPUTFORMAT docs.
- geometryOutputFormat String
- Display format for GEOMETRY values. For more information, check GEOMETRYOUTPUTFORMAT docs.
- jdbcTreatTimestampNtzAsUtc Boolean
- Specifies how JDBC processes TIMESTAMP_NTZ values. For more information, check JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC docs.
- jdbcUseSessionTimezone Boolean
- Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class. For more information, check JDBCUSESESSION_TIMEZONE docs.
- jsonIndent Number
- Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element. For more information, check JSON_INDENT docs.
- lockTimeout Number
- Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement. For more information, check LOCK_TIMEOUT docs.
- logLevel String
- Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level. For more information, check LOG_LEVEL docs.
- multiStatementCount Number
- Number of statements to execute when using the multi-statement capability. For more information, check MULTISTATEMENTCOUNT docs.
- name String
- Specifies the identifier for the task; must be unique for the database and schema in which the task is created. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- noorderSequenceAsDefault Boolean
- Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column. The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order. For more information, check NOORDERSEQUENCEAS_DEFAULT docs.
- odbcTreatDecimalAsInt Boolean
- Specifies how ODBC processes columns that have a scale of zero (0). For more information, check ODBCTREATDECIMALASINT docs.
- parameters List<Property Map>
- Outputs the result of SHOW PARAMETERS IN TASK for the given task.
- queryTag String
- Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY, QUERY_HISTORY_BY_* functions. For more information, check QUERY_TAG docs.
- quotedIdentifiersIgnoreCase Boolean
- Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers. For more information, check QUOTEDIDENTIFIERSIGNORE_CASE docs.
- rowsPerResultset Number
- Specifies the maximum number of rows returned in a result set. A value of 0 specifies no maximum. For more information, check ROWSPERRESULTSET docs.
- s3StageVpceDnsName String
- Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect. For more information, see Accessing Internal stages with dedicated interface endpoints. For more information, check S3STAGEVPCEDNSNAME docs.
- schedule Property Map
- The schedule for periodically running the task. This can be a cron expression or an interval in minutes. (Conflicts with finalize and after; when set, one of the sub-fields minutes or using_cron should be set.)
- schema String
- The schema in which to create the task. Due to technical limitations (read more here), avoid using the following characters: |,.,".
- searchPath String
- Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries. Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name. For more information, check SEARCH_PATH docs.
- showOutputs List<Property Map>
- Outputs the result of SHOW TASKS for the given task.
- sqlStatement String
- Any single SQL statement, or a call to a stored procedure, executed when the task runs.
- started Boolean
- Specifies if the task should be started or suspended.
- statementQueuedTimeoutInSeconds Number
- Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAXCONCURRENCYLEVEL parameter to ensure a warehouse is never backlogged. For more information, check STATEMENTQUEUEDTIMEOUTINSECONDS docs.
- statementTimeoutInSeconds Number
- Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system. For more information, check STATEMENTTIMEOUTIN_SECONDS docs.
- strictJsonOutput Boolean
- This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org). By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON. For more information, check STRICTJSONOUTPUT docs.
- suspendTaskAfterNumFailures Number
- Specifies the number of consecutive failed task runs after which the current task is suspended automatically. The default is 0 (no automatic suspension). For more information, check SUSPENDTASKAFTERNUMFAILURES docs.
- taskAutoRetryAttempts Number
- Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. For more information, check TASKAUTORETRY_ATTEMPTS docs.
- timeInputFormat String
- Specifies the input format for the TIME data type. For more information, see Date and time input and output formats. Any valid, supported time format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session). For more information, check TIMEINPUTFORMAT docs.
- timeOutputFormat String
- Specifies the display format for the TIME data type. For more information, see Date and time input and output formats. For more information, check TIMEOUTPUTFORMAT docs.
- timestampDayIsAlways24h Boolean
- Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days. For more information, check TIMESTAMPDAYISALWAYS24H docs.
- timestampInputFormat String
- Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. Any valid, supported timestamp format or AUTO (AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session). For more information, check TIMESTAMPINPUTFORMAT docs.
- timestampLtzOutputFormat String
- Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_LTZ_OUTPUT_FORMAT docs.
- timestampNtzOutputFormat String
- Specifies the display format for the TIMESTAMP_NTZ data type. For more information, check TIMESTAMP_NTZ_OUTPUT_FORMAT docs.
- timestampOutputFormat String
- Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats. For more information, check TIMESTAMPOUTPUTFORMAT docs.
- timestampTypeMapping String
- Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. For more information, check TIMESTAMP_TYPE_MAPPING docs.
- timestampTzOutputFormat String
- Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and time input and output formats. For more information, check TIMESTAMP_TZ_OUTPUT_FORMAT docs.
- timezone String
- Specifies the time zone for the session. You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.). For more information, check TIMEZONE docs.
- traceLevel String
- Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level. For more information, check TRACE_LEVEL docs.
- transactionAbortOnError Boolean
- Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error. For more information, check TRANSACTIONABORTON_ERROR docs.
- transactionDefaultIsolationLevel String
- Specifies the isolation level for transactions in the user session. For more information, check TRANSACTIONDEFAULTISOLATION_LEVEL docs.
- twoDigitCenturyStart Number
- Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits). For more information, check TWO_DIGIT_CENTURY_START docs.
- unsupportedDdlAction String
- Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error. For more information, check UNSUPPORTEDDDLACTION docs.
- useCachedResult Boolean
- Specifies whether to reuse persisted query results, if available, when a matching query is submitted. For more information, check USECACHEDRESULT docs.
- userTaskManagedInitialWarehouseSize String
- Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. Valid values are (case-insensitive): %s. (Conflicts with warehouse). For more information about warehouses, see docs. For more information, check USERTASKMANAGEDINITIALWAREHOUSE_SIZE docs.
- userTaskMinimumTriggerIntervalInSeconds Number
- Minimum amount of time between Triggered Task executions, in seconds. For more information, check USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS docs.
- userTaskTimeoutMs Number
- Specifies the time limit on a single run of the task before it times out (in milliseconds). For more information, check USERTASKTIMEOUT_MS docs.
- warehouse String
- The warehouse the task will use. Omit this parameter to use Snowflake-managed compute resources for runs of this task. Due to Snowflake limitations, the warehouse identifier can consist of only upper-cased letters. (Conflicts with user_task_managed_initial_warehouse_size.) For more information about this resource, see docs.
- weekOfYearPolicy Number
- Specifies how the weeks in a given year are computed. 0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year. 1: January 1 is included in the first week of the year and December 31 is included in the last week of the year. For more information, check WEEK_OF_YEAR_POLICY docs.
- weekStart Number
- Specifies the first day of the week (used by week-related date functions). 0: Legacy Snowflake behavior is used (i.e. ISO-like semantics). 1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week. For more information, check WEEK_START docs.
- when String
- Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.
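As a companion to the afters and finalize properties listed above, here is a hedged sketch of a small task graph: a scheduled root task, a child task chained via afters, and a finalizer tied to the root. It assumes afters and finalize accept the fully qualified task name exposed by the fully_qualified_name output; all database, schema, and procedure names are placeholders.

import pulumi_snowflake as snowflake

# Root task on an interval schedule (placeholder identifiers throughout).
root = snowflake.Task("root",
    database="MY_DB",
    schema="MY_SCHEMA",
    name="ROOT_TASK",
    schedule=snowflake.TaskScheduleArgs(minutes=60),
    sql_statement="CALL MY_DB.MY_SCHEMA.STAGE_DATA()",
    started=True)

# Child task chained to the root via `afters` (assumed here to take the
# predecessor's fully qualified name); it has no schedule of its own.
child = snowflake.Task("child",
    database="MY_DB",
    schema="MY_SCHEMA",
    name="CHILD_TASK",
    afters=[root.fully_qualified_name],
    sql_statement="CALL MY_DB.MY_SCHEMA.TRANSFORM_DATA()",
    started=True)

# Finalizer associated with the root task; runs once the rest of the graph completes.
cleanup = snowflake.Task("cleanup",
    database="MY_DB",
    schema="MY_SCHEMA",
    name="CLEANUP_TASK",
    finalize=root.fully_qualified_name,
    sql_statement="CALL MY_DB.MY_SCHEMA.CLEANUP()",
    started=True)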
Supporting Types
TaskParameter, TaskParameterArgs    
- AbortDetachedQueries List<TaskParameterAbortDetachedQuery>
- Autocommits List<TaskParameterAutocommit>
- BinaryInputFormats List<TaskParameterBinaryInputFormat>
- BinaryOutputFormats List<TaskParameterBinaryOutputFormat>
- ClientMemoryLimits List<TaskParameterClientMemoryLimit>
- ClientMetadataRequestUseConnectionCtxes List<TaskParameterClientMetadataRequestUseConnectionCtx>
- ClientPrefetchThreads List<TaskParameterClientPrefetchThread>
- ClientResultChunkSizes List<TaskParameterClientResultChunkSize>
- ClientResultColumnCaseInsensitives List<TaskParameterClientResultColumnCaseInsensitive>
- ClientSessionKeepAliveHeartbeatFrequencies List<TaskParameterClientSessionKeepAliveHeartbeatFrequency>
- ClientSessionKeepAlives List<TaskParameterClientSessionKeepAlife>
- ClientTimestampTypeMappings List<TaskParameterClientTimestampTypeMapping>
- DateInputFormats List<TaskParameterDateInputFormat>
- DateOutputFormats List<TaskParameterDateOutputFormat>
- EnableUnloadPhysicalTypeOptimizations List<TaskParameterEnableUnloadPhysicalTypeOptimization>
- ErrorOnNondeterministicMerges List<TaskParameterErrorOnNondeterministicMerge>
- ErrorOnNondeterministicUpdates List<TaskParameterErrorOnNondeterministicUpdate>
- GeographyOutputFormats List<TaskParameterGeographyOutputFormat>
- GeometryOutputFormats List<TaskParameterGeometryOutputFormat>
- JdbcTreatTimestampNtzAsUtcs List<TaskParameterJdbcTreatTimestampNtzAsUtc>
- JdbcUseSessionTimezones List<TaskParameterJdbcUseSessionTimezone>
- JsonIndents List<TaskParameterJsonIndent>
- LockTimeouts List<TaskParameterLockTimeout>
- LogLevels List<TaskParameterLogLevel>
- MultiStatementCounts List<TaskParameterMultiStatementCount>
- NoorderSequenceAsDefaults List<TaskParameterNoorderSequenceAsDefault>
- OdbcTreatDecimalAsInts List<TaskParameterOdbcTreatDecimalAsInt>
- QueryTags List<TaskParameterQueryTag>
- QuotedIdentifiersIgnoreCases List<TaskParameterQuotedIdentifiersIgnoreCase>
- RowsPerResultsets List<TaskParameterRowsPerResultset>
- S3StageVpceDnsNames List<TaskParameterS3StageVpceDnsName>
- SearchPaths List<TaskParameterSearchPath>
- StatementQueuedTimeoutInSeconds List<TaskParameterStatementQueuedTimeoutInSecond>
- StatementTimeoutInSeconds List<TaskParameterStatementTimeoutInSecond>
- StrictJsonOutputs List<TaskParameterStrictJsonOutput>
- SuspendTaskAfterNumFailures List<TaskParameterSuspendTaskAfterNumFailure>
- TaskAutoRetryAttempts List<TaskParameterTaskAutoRetryAttempt>
- TimeInputFormats List<TaskParameterTimeInputFormat>
- TimeOutputFormats List<TaskParameterTimeOutputFormat>
- TimestampDayIsAlways24hs List<TaskParameterTimestampDayIsAlways24h>
- TimestampInputFormats List<TaskParameterTimestampInputFormat>
- TimestampLtzOutputFormats List<TaskParameterTimestampLtzOutputFormat>
- TimestampNtzOutputFormats List<TaskParameterTimestampNtzOutputFormat>
- TimestampOutputFormats List<TaskParameterTimestampOutputFormat>
- TimestampTypeMappings List<TaskParameterTimestampTypeMapping>
- TimestampTzOutputFormats List<TaskParameterTimestampTzOutputFormat>
- Timezones List<TaskParameterTimezone>
- TraceLevels List<TaskParameterTraceLevel>
- TransactionAbortOnErrors List<TaskParameterTransactionAbortOnError>
- TransactionDefaultIsolationLevels List<TaskParameterTransactionDefaultIsolationLevel>
- TwoDigitCenturyStarts List<TaskParameterTwoDigitCenturyStart>
- UnsupportedDdlActions List<TaskParameterUnsupportedDdlAction>
- UseCachedResults List<TaskParameterUseCachedResult>
- UserTaskManagedInitialWarehouseSizes List<TaskParameterUserTaskManagedInitialWarehouseSize>
- UserTaskMinimumTriggerIntervalInSeconds List<TaskParameterUserTaskMinimumTriggerIntervalInSecond>
- UserTaskTimeoutMs List<TaskParameterUserTaskTimeoutM>
- WeekOfYearPolicies List<TaskParameterWeekOfYearPolicy>
- WeekStarts List<TaskParameterWeekStart>
- AbortDetachedQueries []TaskParameterAbortDetachedQuery
- Autocommits []TaskParameterAutocommit
- BinaryInputFormats []TaskParameterBinaryInputFormat
- BinaryOutputFormats []TaskParameterBinaryOutputFormat
- ClientMemoryLimits []TaskParameterClientMemoryLimit
- ClientMetadataRequestUseConnectionCtxes []TaskParameterClientMetadataRequestUseConnectionCtx
- ClientPrefetchThreads []TaskParameterClientPrefetchThread
- ClientResultChunkSizes []TaskParameterClientResultChunkSize
- ClientResultColumnCaseInsensitives []TaskParameterClientResultColumnCaseInsensitive
- ClientSessionKeepAliveHeartbeatFrequencies []TaskParameterClientSessionKeepAliveHeartbeatFrequency
- ClientSessionKeepAlives []TaskParameterClientSessionKeepAlife
- ClientTimestampTypeMappings []TaskParameterClientTimestampTypeMapping
- DateInputFormats []TaskParameterDateInputFormat
- DateOutputFormats []TaskParameterDateOutputFormat
- EnableUnloadPhysicalTypeOptimizations []TaskParameterEnableUnloadPhysicalTypeOptimization
- ErrorOnNondeterministicMerges []TaskParameterErrorOnNondeterministicMerge
- ErrorOnNondeterministicUpdates []TaskParameterErrorOnNondeterministicUpdate
- GeographyOutputFormats []TaskParameterGeographyOutputFormat
- GeometryOutputFormats []TaskParameterGeometryOutputFormat
- JdbcTreatTimestampNtzAsUtcs []TaskParameterJdbcTreatTimestampNtzAsUtc
- JdbcUseSessionTimezones []TaskParameterJdbcUseSessionTimezone
- JsonIndents []TaskParameterJsonIndent
- LockTimeouts []TaskParameterLockTimeout
- LogLevels []TaskParameterLogLevel
- MultiStatementCounts []TaskParameterMultiStatementCount
- NoorderSequenceAsDefaults []TaskParameterNoorderSequenceAsDefault
- OdbcTreatDecimalAsInts []TaskParameterOdbcTreatDecimalAsInt
- QueryTags []TaskParameterQueryTag
- QuotedIdentifiersIgnoreCases []TaskParameterQuotedIdentifiersIgnoreCase
- RowsPerResultsets []TaskParameterRowsPerResultset
- S3StageVpceDnsNames []TaskParameterS3StageVpceDnsName
- SearchPaths []TaskParameterSearchPath
- StatementQueuedTimeoutInSeconds []TaskParameterStatementQueuedTimeoutInSecond
- StatementTimeoutInSeconds []TaskParameterStatementTimeoutInSecond
- StrictJsonOutputs []TaskParameterStrictJsonOutput
- SuspendTaskAfterNumFailures []TaskParameterSuspendTaskAfterNumFailure
- TaskAutoRetryAttempts []TaskParameterTaskAutoRetryAttempt
- TimeInputFormats []TaskParameterTimeInputFormat
- TimeOutputFormats []TaskParameterTimeOutputFormat
- TimestampDayIsAlways24hs []TaskParameterTimestampDayIsAlways24h
- TimestampInputFormats []TaskParameterTimestampInputFormat
- TimestampLtzOutputFormats []TaskParameterTimestampLtzOutputFormat
- TimestampNtzOutputFormats []TaskParameterTimestampNtzOutputFormat
- TimestampOutputFormats []TaskParameterTimestampOutputFormat
- TimestampTypeMappings []TaskParameterTimestampTypeMapping
- TimestampTzOutputFormats []TaskParameterTimestampTzOutputFormat
- Timezones []TaskParameterTimezone
- TraceLevels []TaskParameterTraceLevel
- TransactionAbortOnErrors []TaskParameterTransactionAbortOnError
- TransactionDefaultIsolationLevels []TaskParameterTransactionDefaultIsolationLevel
- TwoDigitCenturyStarts []TaskParameterTwoDigitCenturyStart
- UnsupportedDdlActions []TaskParameterUnsupportedDdlAction
- UseCachedResults []TaskParameterUseCachedResult
- UserTaskManagedInitialWarehouseSizes []TaskParameterUserTaskManagedInitialWarehouseSize
- UserTaskMinimumTriggerIntervalInSeconds []TaskParameterUserTaskMinimumTriggerIntervalInSecond
- UserTaskTimeoutMs []TaskParameterUserTaskTimeoutM
- WeekOfYearPolicies []TaskParameterWeekOfYearPolicy
- WeekStarts []TaskParameterWeekStart
- abortDetachedQueries List<TaskParameterAbortDetachedQuery>
- autocommits List<TaskParameterAutocommit>
- binaryInputFormats List<TaskParameterBinaryInputFormat>
- binaryOutputFormats List<TaskParameterBinaryOutputFormat>
- clientMemoryLimits List<TaskParameterClientMemoryLimit>
- clientMetadataRequestUseConnectionCtxes List<TaskParameterClientMetadataRequestUseConnectionCtx>
- clientPrefetchThreads List<TaskParameterClientPrefetchThread>
- clientResultChunkSizes List<TaskParameterClientResultChunkSize>
- clientResultColumnCaseInsensitives List<TaskParameterClientResultColumnCaseInsensitive>
- clientSessionKeepAliveHeartbeatFrequencies List<TaskParameterClientSessionKeepAliveHeartbeatFrequency>
- clientSessionKeepAlives List<TaskParameterClientSessionKeepAlife>
- clientTimestampTypeMappings List<TaskParameterClientTimestampTypeMapping>
- dateInputFormats List<TaskParameterDateInputFormat>
- dateOutputFormats List<TaskParameterDateOutputFormat>
- enableUnloadPhysicalTypeOptimizations List<TaskParameterEnableUnloadPhysicalTypeOptimization>
- errorOnNondeterministicMerges List<TaskParameterErrorOnNondeterministicMerge>
- errorOnNondeterministicUpdates List<TaskParameterErrorOnNondeterministicUpdate>
- geographyOutputFormats List<TaskParameterGeographyOutputFormat>
- geometryOutputFormats List<TaskParameterGeometryOutputFormat>
- jdbcTreatTimestampNtzAsUtcs List<TaskParameterJdbcTreatTimestampNtzAsUtc>
- jdbcUseSessionTimezones List<TaskParameterJdbcUseSessionTimezone>
- jsonIndents List<TaskParameterJsonIndent>
- lockTimeouts List<TaskParameterLockTimeout>
- logLevels List<TaskParameterLogLevel>
- multiStatementCounts List<TaskParameterMultiStatementCount>
- noorderSequenceAsDefaults List<TaskParameterNoorderSequenceAsDefault>
- odbcTreatDecimalAsInts List<TaskParameterOdbcTreatDecimalAsInt>
- queryTags List<TaskParameterQueryTag>
- quotedIdentifiersIgnoreCases List<TaskParameterQuotedIdentifiersIgnoreCase>
- rowsPerResultsets List<TaskParameterRowsPerResultset>
- s3StageVpceDnsNames List<TaskParameterS3StageVpceDnsName>
- searchPaths List<TaskParameterSearchPath>
- statementQueuedTimeoutInSeconds List<TaskParameterStatementQueuedTimeoutInSecond>
- statementTimeoutInSeconds List<TaskParameterStatementTimeoutInSecond>
- strictJsonOutputs List<TaskParameterStrictJsonOutput>
- suspendTaskAfterNumFailures List<TaskParameterSuspendTaskAfterNumFailure>
- taskAutoRetryAttempts List<TaskParameterTaskAutoRetryAttempt>
- timeInputFormats List<TaskParameterTimeInputFormat>
- timeOutputFormats List<TaskParameterTimeOutputFormat>
- timestampDayIsAlways24hs List<TaskParameterTimestampDayIsAlways24h>
- timestampInputFormats List<TaskParameterTimestampInputFormat>
- timestampLtzOutputFormats List<TaskParameterTimestampLtzOutputFormat>
- timestampNtzOutputFormats List<TaskParameterTimestampNtzOutputFormat>
- timestampOutputFormats List<TaskParameterTimestampOutputFormat>
- timestampTypeMappings List<TaskParameterTimestampTypeMapping>
- timestampTzOutputFormats List<TaskParameterTimestampTzOutputFormat>
- timezones List<TaskParameterTimezone>
- traceLevels List<TaskParameterTraceLevel>
- transactionAbortOnErrors List<TaskParameterTransactionAbortOnError>
- transactionDefaultIsolationLevels List<TaskParameterTransactionDefaultIsolationLevel>
- twoDigitCenturyStarts List<TaskParameterTwoDigitCenturyStart>
- unsupportedDdlActions List<TaskParameterUnsupportedDdlAction>
- useCachedResults List<TaskParameterUseCachedResult>
- userTaskManagedInitialWarehouseSizes List<TaskParameterUserTaskManagedInitialWarehouseSize>
- userTaskMinimumTriggerIntervalInSeconds List<TaskParameterUserTaskMinimumTriggerIntervalInSecond>
- userTaskTimeoutMs List<TaskParameterUserTaskTimeoutM>
- weekOfYearPolicies List<TaskParameterWeekOfYearPolicy>
- weekStarts List<TaskParameterWeekStart>
- abortDetachedQueries TaskParameterAbortDetachedQuery[]
- autocommits TaskParameterAutocommit[]
- binaryInputFormats TaskParameterBinaryInputFormat[]
- binaryOutputFormats TaskParameterBinaryOutputFormat[]
- clientMemoryLimits TaskParameterClientMemoryLimit[]
- clientMetadataRequestUseConnectionCtxes TaskParameterClientMetadataRequestUseConnectionCtx[]
- clientPrefetchThreads TaskParameterClientPrefetchThread[]
- clientResultChunkSizes TaskParameterClientResultChunkSize[]
- clientResultColumnCaseInsensitives TaskParameterClientResultColumnCaseInsensitive[]
- clientSessionKeepAliveHeartbeatFrequencies TaskParameterClientSessionKeepAliveHeartbeatFrequency[]
- clientSessionKeepAlives TaskParameterClientSessionKeepAlife[]
- clientTimestampTypeMappings TaskParameterClientTimestampTypeMapping[]
- dateInputFormats TaskParameterDateInputFormat[]
- dateOutputFormats TaskParameterDateOutputFormat[]
- enableUnloadPhysicalTypeOptimizations TaskParameterEnableUnloadPhysicalTypeOptimization[]
- errorOnNondeterministicMerges TaskParameterErrorOnNondeterministicMerge[]
- errorOnNondeterministicUpdates TaskParameterErrorOnNondeterministicUpdate[]
- geographyOutputFormats TaskParameterGeographyOutputFormat[]
- geometryOutputFormats TaskParameterGeometryOutputFormat[]
- jdbcTreatTimestampNtzAsUtcs TaskParameterJdbcTreatTimestampNtzAsUtc[]
- jdbcUseSessionTimezones TaskParameterJdbcUseSessionTimezone[]
- jsonIndents TaskParameterJsonIndent[]
- lockTimeouts TaskParameterLockTimeout[]
- logLevels TaskParameterLogLevel[]
- multiStatementCounts TaskParameterMultiStatementCount[]
- noorderSequenceAsDefaults TaskParameterNoorderSequenceAsDefault[]
- odbcTreatDecimalAsInts TaskParameterOdbcTreatDecimalAsInt[]
- queryTags TaskParameterQueryTag[]
- quotedIdentifiersIgnoreCases TaskParameterQuotedIdentifiersIgnoreCase[]
- rowsPerResultsets TaskParameterRowsPerResultset[]
- s3StageVpceDnsNames TaskParameterS3StageVpceDnsName[]
- searchPaths TaskParameterSearchPath[]
- statementQueuedTimeoutInSeconds TaskParameterStatementQueuedTimeoutInSecond[]
- statementTimeoutInSeconds TaskParameterStatementTimeoutInSecond[]
- strictJsonOutputs TaskParameterStrictJsonOutput[]
- suspendTaskAfterNumFailures TaskParameterSuspendTaskAfterNumFailure[]
- taskAutoRetryAttempts TaskParameterTaskAutoRetryAttempt[]
- timeInputFormats TaskParameterTimeInputFormat[]
- timeOutputFormats TaskParameterTimeOutputFormat[]
- timestampDayIsAlways24hs TaskParameterTimestampDayIsAlways24h[]
- timestampInputFormats TaskParameterTimestampInputFormat[]
- timestampLtzOutputFormats TaskParameterTimestampLtzOutputFormat[]
- timestampNtzOutputFormats TaskParameterTimestampNtzOutputFormat[]
- timestampOutputFormats TaskParameterTimestampOutputFormat[]
- timestampTypeMappings TaskParameterTimestampTypeMapping[]
- timestampTzOutputFormats TaskParameterTimestampTzOutputFormat[]
- timezones TaskParameterTimezone[]
- traceLevels TaskParameterTraceLevel[]
- transactionAbortOnErrors TaskParameterTransactionAbortOnError[]
- transactionDefaultIsolationLevels TaskParameterTransactionDefaultIsolationLevel[]
- twoDigitCenturyStarts TaskParameterTwoDigitCenturyStart[]
- unsupportedDdlActions TaskParameterUnsupportedDdlAction[]
- useCachedResults TaskParameterUseCachedResult[]
- userTaskManagedInitialWarehouseSizes TaskParameterUserTaskManagedInitialWarehouseSize[]
- userTaskMinimumTriggerIntervalInSeconds TaskParameterUserTaskMinimumTriggerIntervalInSecond[]
- userTaskTimeoutMs TaskParameterUserTaskTimeoutM[]
- weekOfYearPolicies TaskParameterWeekOfYearPolicy[]
- weekStarts TaskParameterWeekStart[]
- abort_detached_queries Sequence[TaskParameterAbortDetachedQuery]
- autocommits Sequence[TaskParameterAutocommit]
- binary_input_formats Sequence[TaskParameterBinaryInputFormat]
- binary_output_formats Sequence[TaskParameterBinaryOutputFormat]
- client_memory_limits Sequence[TaskParameterClientMemoryLimit]
- client_metadata_request_use_connection_ctxes Sequence[TaskParameterClientMetadataRequestUseConnectionCtx]
- client_prefetch_threads Sequence[TaskParameterClientPrefetchThread]
- client_result_chunk_sizes Sequence[TaskParameterClientResultChunkSize]
- client_result_column_case_insensitives Sequence[TaskParameterClientResultColumnCaseInsensitive]
- client_session_keep_alive_heartbeat_frequencies Sequence[TaskParameterClientSessionKeepAliveHeartbeatFrequency]
- client_session_keep_alives Sequence[TaskParameterClientSessionKeepAlife]
- client_timestamp_type_mappings Sequence[TaskParameterClientTimestampTypeMapping]
- date_input_formats Sequence[TaskParameterDateInputFormat]
- date_output_formats Sequence[TaskParameterDateOutputFormat]
- enable_unload_physical_type_optimizations Sequence[TaskParameterEnableUnloadPhysicalTypeOptimization]
- error_on_nondeterministic_merges Sequence[TaskParameterErrorOnNondeterministicMerge]
- error_on_nondeterministic_updates Sequence[TaskParameterErrorOnNondeterministicUpdate]
- geography_output_formats Sequence[TaskParameterGeographyOutputFormat]
- geometry_output_formats Sequence[TaskParameterGeometryOutputFormat]
- jdbc_treat_timestamp_ntz_as_utcs Sequence[TaskParameterJdbcTreatTimestampNtzAsUtc]
- jdbc_use_session_timezones Sequence[TaskParameterJdbcUseSessionTimezone]
- json_indents Sequence[TaskParameterJsonIndent]
- lock_timeouts Sequence[TaskParameterLockTimeout]
- log_levels Sequence[TaskParameterLogLevel]
- multi_statement_counts Sequence[TaskParameterMultiStatementCount]
- noorder_sequence_as_defaults Sequence[TaskParameterNoorderSequenceAsDefault]
- odbc_treat_decimal_as_ints Sequence[TaskParameterOdbcTreatDecimalAsInt]
- query_tags Sequence[TaskParameterQueryTag]
- quoted_identifiers_ignore_cases Sequence[TaskParameterQuotedIdentifiersIgnoreCase]
- rows_per_resultsets Sequence[TaskParameterRowsPerResultset]
- s3_stage_vpce_dns_names Sequence[TaskParameterS3StageVpceDnsName]
- search_paths Sequence[TaskParameterSearchPath]
- statement_queued_timeout_in_seconds Sequence[TaskParameterStatementQueuedTimeoutInSecond]
- statement_timeout_in_seconds Sequence[TaskParameterStatementTimeoutInSecond]
- strict_json_outputs Sequence[TaskParameterStrictJsonOutput]
- suspend_task_after_num_failures Sequence[TaskParameterSuspendTaskAfterNumFailure]
- task_auto_retry_attempts Sequence[TaskParameterTaskAutoRetryAttempt]
- time_input_formats Sequence[TaskParameterTimeInputFormat]
- time_output_formats Sequence[TaskParameterTimeOutputFormat]
- timestamp_day_is_always24hs Sequence[TaskParameterTimestampDayIsAlways24h]
- timestamp_input_formats Sequence[TaskParameterTimestampInputFormat]
- timestamp_ltz_output_formats Sequence[TaskParameterTimestampLtzOutputFormat]
- timestamp_ntz_output_formats Sequence[TaskParameterTimestampNtzOutputFormat]
- timestamp_output_formats Sequence[TaskParameterTimestampOutputFormat]
- timestamp_type_mappings Sequence[TaskParameterTimestampTypeMapping]
- timestamp_tz_output_formats Sequence[TaskParameterTimestampTzOutputFormat]
- timezones Sequence[TaskParameterTimezone]
- trace_levels Sequence[TaskParameterTraceLevel]
- transaction_abort_on_errors Sequence[TaskParameterTransactionAbortOnError]
- transaction_default_isolation_levels Sequence[TaskParameterTransactionDefaultIsolationLevel]
- two_digit_century_starts Sequence[TaskParameterTwoDigitCenturyStart]
- unsupported_ddl_actions Sequence[TaskParameterUnsupportedDdlAction]
- use_cached_results Sequence[TaskParameterUseCachedResult]
- user_task_managed_initial_warehouse_sizes Sequence[TaskParameterUserTaskManagedInitialWarehouseSize]
- user_task_minimum_trigger_interval_in_seconds Sequence[TaskParameterUserTaskMinimumTriggerIntervalInSecond]
- user_task_timeout_ms Sequence[TaskParameterUserTaskTimeoutM]
- week_of_year_policies Sequence[TaskParameterWeekOfYearPolicy]
- week_starts Sequence[TaskParameterWeekStart]
- abortDetachedQueries List<Property Map>
- autocommits List<Property Map>
- binaryInputFormats List<Property Map>
- binaryOutputFormats List<Property Map>
- clientMemoryLimits List<Property Map>
- clientMetadataRequestUseConnectionCtxes List<Property Map>
- clientPrefetchThreads List<Property Map>
- clientResultChunkSizes List<Property Map>
- clientResultColumnCaseInsensitives List<Property Map>
- clientSessionKeepAliveHeartbeatFrequencies List<Property Map>
- clientSessionKeepAlives List<Property Map>
- clientTimestampTypeMappings List<Property Map>
- dateInputFormats List<Property Map>
- dateOutputFormats List<Property Map>
- enableUnloadPhysicalTypeOptimizations List<Property Map>
- errorOnNondeterministicMerges List<Property Map>
- errorOnNondeterministicUpdates List<Property Map>
- geographyOutputFormats List<Property Map>
- geometryOutputFormats List<Property Map>
- jdbcTreatTimestampNtzAsUtcs List<Property Map>
- jdbcUseSessionTimezones List<Property Map>
- jsonIndents List<Property Map>
- lockTimeouts List<Property Map>
- logLevels List<Property Map>
- multiStatementCounts List<Property Map>
- noorderSequenceAsDefaults List<Property Map>
- odbcTreatDecimalAsInts List<Property Map>
- queryTags List<Property Map>
- quotedIdentifiersIgnoreCases List<Property Map>
- rowsPerResultsets List<Property Map>
- s3StageVpceDnsNames List<Property Map>
- searchPaths List<Property Map>
- statementQueuedTimeoutInSeconds List<Property Map>
- statementTimeoutInSeconds List<Property Map>
- strictJsonOutputs List<Property Map>
- suspendTaskAfterNumFailures List<Property Map>
- taskAutoRetryAttempts List<Property Map>
- timeInputFormats List<Property Map>
- timeOutputFormats List<Property Map>
- timestampDayIsAlways24hs List<Property Map>
- timestampInputFormats List<Property Map>
- timestampLtzOutputFormats List<Property Map>
- timestampNtzOutputFormats List<Property Map>
- timestampOutputFormats List<Property Map>
- timestampTypeMappings List<Property Map>
- timestampTzOutputFormats List<Property Map>
- timezones List<Property Map>
- traceLevels List<Property Map>
- transactionAbortOnErrors List<Property Map>
- transactionDefaultIsolationLevels List<Property Map>
- twoDigitCenturyStarts List<Property Map>
- unsupportedDdlActions List<Property Map>
- useCachedResults List<Property Map>
- userTaskManagedInitialWarehouseSizes List<Property Map>
- userTaskMinimumTriggerIntervalInSeconds List<Property Map>
- userTaskTimeoutMs List<Property Map>
- weekOfYearPolicies List<Property Map>
- weekStarts List<Property Map>
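Since parameters and show_outputs are read-only attributes that mirror SHOW PARAMETERS IN TASK and SHOW TASKS, a hedged sketch of inspecting them from a Pulumi program follows. The field access pattern assumes the nested shapes listed above (for example, that a user_task_timeout_ms entry exposes a value field); the resource arguments are placeholders.

import pulumi
import pulumi_snowflake as snowflake

task = snowflake.Task("example",
    database="MY_DB",
    schema="MY_SCHEMA",
    sql_statement="SELECT 1",
    started=False)

# `parameters` mirrors SHOW PARAMETERS IN TASK; each nested record exposes
# default/description/key/level/value fields as strings (assumed shape).
pulumi.export("user_task_timeout_ms",
    task.parameters.apply(
        lambda ps: ps[0].user_task_timeout_ms[0].value if ps else None))

# `show_outputs` mirrors SHOW TASKS for this task; exported here unmodified.
pulumi.export("show_output", task.show_outputs)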
TaskParameterAbortDetachedQuery, TaskParameterAbortDetachedQueryArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterAutocommit, TaskParameterAutocommitArgs      
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterBinaryInputFormat, TaskParameterBinaryInputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterBinaryOutputFormat, TaskParameterBinaryOutputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterClientMemoryLimit, TaskParameterClientMemoryLimitArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterClientMetadataRequestUseConnectionCtx, TaskParameterClientMetadataRequestUseConnectionCtxArgs                
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterClientPrefetchThread, TaskParameterClientPrefetchThreadArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterClientResultChunkSize, TaskParameterClientResultChunkSizeArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterClientResultColumnCaseInsensitive, TaskParameterClientResultColumnCaseInsensitiveArgs              
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterClientSessionKeepAlife, TaskParameterClientSessionKeepAlifeArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterClientSessionKeepAliveHeartbeatFrequency, TaskParameterClientSessionKeepAliveHeartbeatFrequencyArgs                
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterClientTimestampTypeMapping, TaskParameterClientTimestampTypeMappingArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterDateInputFormat, TaskParameterDateInputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterDateOutputFormat, TaskParameterDateOutputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterEnableUnloadPhysicalTypeOptimization, TaskParameterEnableUnloadPhysicalTypeOptimizationArgs              
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterErrorOnNondeterministicMerge, TaskParameterErrorOnNondeterministicMergeArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterErrorOnNondeterministicUpdate, TaskParameterErrorOnNondeterministicUpdateArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterGeographyOutputFormat, TaskParameterGeographyOutputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterGeometryOutputFormat, TaskParameterGeometryOutputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterJdbcTreatTimestampNtzAsUtc, TaskParameterJdbcTreatTimestampNtzAsUtcArgs                
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterJdbcUseSessionTimezone, TaskParameterJdbcUseSessionTimezoneArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterJsonIndent, TaskParameterJsonIndentArgs        
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterLockTimeout, TaskParameterLockTimeoutArgs        
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterLogLevel, TaskParameterLogLevelArgs        
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterMultiStatementCount, TaskParameterMultiStatementCountArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterNoorderSequenceAsDefault, TaskParameterNoorderSequenceAsDefaultArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterOdbcTreatDecimalAsInt, TaskParameterOdbcTreatDecimalAsIntArgs              
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterQueryTag, TaskParameterQueryTagArgs        
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterQuotedIdentifiersIgnoreCase, TaskParameterQuotedIdentifiersIgnoreCaseArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterRowsPerResultset, TaskParameterRowsPerResultsetArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterS3StageVpceDnsName, TaskParameterS3StageVpceDnsNameArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterSearchPath, TaskParameterSearchPathArgs        
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterStatementQueuedTimeoutInSecond, TaskParameterStatementQueuedTimeoutInSecondArgs              
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterStatementTimeoutInSecond, TaskParameterStatementTimeoutInSecondArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterStrictJsonOutput, TaskParameterStrictJsonOutputArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterSuspendTaskAfterNumFailure, TaskParameterSuspendTaskAfterNumFailureArgs              
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTaskAutoRetryAttempt, TaskParameterTaskAutoRetryAttemptArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimeInputFormat, TaskParameterTimeInputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimeOutputFormat, TaskParameterTimeOutputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimestampDayIsAlways24h, TaskParameterTimestampDayIsAlways24hArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimestampInputFormat, TaskParameterTimestampInputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimestampLtzOutputFormat, TaskParameterTimestampLtzOutputFormatArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimestampNtzOutputFormat, TaskParameterTimestampNtzOutputFormatArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimestampOutputFormat, TaskParameterTimestampOutputFormatArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimestampTypeMapping, TaskParameterTimestampTypeMappingArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimestampTzOutputFormat, TaskParameterTimestampTzOutputFormatArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTimezone, TaskParameterTimezoneArgs      
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTraceLevel, TaskParameterTraceLevelArgs        
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTransactionAbortOnError, TaskParameterTransactionAbortOnErrorArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTransactionDefaultIsolationLevel, TaskParameterTransactionDefaultIsolationLevelArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterTwoDigitCenturyStart, TaskParameterTwoDigitCenturyStartArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterUnsupportedDdlAction, TaskParameterUnsupportedDdlActionArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterUseCachedResult, TaskParameterUseCachedResultArgs          
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterUserTaskManagedInitialWarehouseSize, TaskParameterUserTaskManagedInitialWarehouseSizeArgs                
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterUserTaskMinimumTriggerIntervalInSecond, TaskParameterUserTaskMinimumTriggerIntervalInSecondArgs                  
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterUserTaskTimeoutM, TaskParameterUserTaskTimeoutMArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterWeekOfYearPolicy, TaskParameterWeekOfYearPolicyArgs            
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskParameterWeekStart, TaskParameterWeekStartArgs        
- Default string
- Description string
- Key string
- Level string
- Value string
- Default string
- Description string
- Key string
- Level string
- Value string
- default_ String
- description String
- key String
- level String
- value String
- default string
- description string
- key string
- level string
- value string
- default str
- description str
- key str
- level str
- value str
- default String
- description String
- key String
- level String
- value String
TaskSchedule, TaskScheduleArgs    
- Minutes int
- Specifies an interval (in minutes) of wait time inserted between runs of the task. Accepts positive integers only. (conflicts with using_cron)
- UsingCron string
- Specifies a cron expression and time zone for periodically running the task. Supports a subset of standard cron utility syntax. (conflicts with minutes)
- Minutes int
- Specifies an interval (in minutes) of wait time inserted between runs of the task. Accepts positive integers only. (conflicts with using_cron)
- UsingCron string
- Specifies a cron expression and time zone for periodically running the task. Supports a subset of standard cron utility syntax. (conflicts with minutes)
- minutes Integer
- Specifies an interval (in minutes) of wait time inserted between runs of the task. Accepts positive integers only. (conflicts with using_cron)
- usingCron String
- Specifies a cron expression and time zone for periodically running the task. Supports a subset of standard cron utility syntax. (conflicts with minutes)
- minutes number
- Specifies an interval (in minutes) of wait time inserted between runs of the task. Accepts positive integers only. (conflicts with using_cron)
- usingCron string
- Specifies a cron expression and time zone for periodically running the task. Supports a subset of standard cron utility syntax. (conflicts with minutes)
- minutes int
- Specifies an interval (in minutes) of wait time inserted between runs of the task. Accepts positive integers only. (conflicts with using_cron)
- using_cron str
- Specifies a cron expression and time zone for periodically running the task. Supports a subset of standard cron utility syntax. (conflicts with minutes)
- minutes Number
- Specifies an interval (in minutes) of wait time inserted between runs of the task. Accepts positive integers only. (conflicts with using_cron)
- usingCron String
- Specifies a cron expression and time zone for periodically running the task. Supports a subset of standard cron utility syntax. (conflicts with minutes)
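The two schedule fields are mutually exclusive: a task runs either on a fixed minute interval or on a cron expression with a time zone. A minimal sketch (TypeScript) of both forms follows; the database, schema, and procedure names are placeholders.

import * as snowflake from "@pulumi/snowflake";

// Interval form: run every 10 minutes (cannot be combined with usingCron).
const intervalTask = new snowflake.Task("interval-task", {
    database: "MY_DB",
    schema: "MY_SCHEMA",
    sqlStatement: "CALL my_proc()",
    started: true,
    schedule: { minutes: 10 },
});

// Cron form: run daily at 02:00 UTC (cannot be combined with minutes).
const cronTask = new snowflake.Task("cron-task", {
    database: "MY_DB",
    schema: "MY_SCHEMA",
    sqlStatement: "CALL my_nightly_proc()",
    started: true,
    schedule: { usingCron: "0 2 * * * UTC" },
});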
TaskShowOutput, TaskShowOutputArgs      
- AllowOverlappingExecution bool
- Budget string
- Comment string
- Condition string
- Config string
- CreatedOn string
- DatabaseName string
- Definition string
- ErrorIntegration string
- Id string
- LastCommittedOn string
- LastSuspendedOn string
- LastSuspendedReason string
- Name string
- Owner string
- OwnerRoleType string
- Predecessors List<string>
- Schedule string
- SchemaName string
- State string
- TaskRelations List<TaskShowOutputTaskRelation>
- Warehouse string
- AllowOverlappingExecution bool
- Budget string
- Comment string
- Condition string
- Config string
- CreatedOn string
- DatabaseName string
- Definition string
- ErrorIntegration string
- Id string
- LastCommittedOn string
- LastSuspendedOn string
- LastSuspendedReason string
- Name string
- Owner string
- OwnerRoleType string
- Predecessors []string
- Schedule string
- SchemaName string
- State string
- TaskRelations []TaskShowOutputTaskRelation
- Warehouse string
- allowOverlappingExecution Boolean
- budget String
- comment String
- condition String
- config String
- createdOn String
- databaseName String
- definition String
- errorIntegration String
- id String
- lastCommittedOn String
- lastSuspendedOn String
- lastSuspendedReason String
- name String
- owner String
- ownerRoleType String
- predecessors List<String>
- schedule String
- schemaName String
- state String
- taskRelations List<TaskShowOutputTaskRelation>
- warehouse String
- allowOverlappingExecution boolean
- budget string
- comment string
- condition string
- config string
- createdOn string
- databaseName string
- definition string
- errorIntegration string
- id string
- lastCommittedOn string
- lastSuspendedOn string
- lastSuspendedReason string
- name string
- owner string
- ownerRoleType string
- predecessors string[]
- schedule string
- schemaName string
- state string
- taskRelations TaskShowOutputTaskRelation[]
- warehouse string
- allow_overlapping_execution bool
- budget str
- comment str
- condition str
- config str
- created_on str
- database_name str
- definition str
- error_integration str
- id str
- last_committed_on str
- last_suspended_on str
- last_suspended_reason str
- name str
- owner str
- owner_role_type str
- predecessors Sequence[str]
- schedule str
- schema_name str
- state str
- task_relations Sequence[TaskShowOutputTaskRelation]
- warehouse str
- allowOverlappingExecution Boolean
- budget String
- comment String
- condition String
- config String
- createdOn String
- databaseName String
- definition String
- errorIntegration String
- id String
- lastCommittedOn String
- lastSuspendedOn String
- lastSuspendedReason String
- name String
- owner String
- ownerRoleType String
- predecessors List<String>
- schedule String
- schemaName String
- state String
- taskRelations List<Property Map>
- warehouse String
TaskShowOutputTaskRelation, TaskShowOutputTaskRelationArgs          
- FinalizedRootTask string
- Finalizer string
- Predecessors List<string>
- FinalizedRootTask string
- Finalizer string
- Predecessors []string
- finalizedRootTask String
- finalizer String
- predecessors List<String>
- finalizedRootTask string
- finalizer string
- predecessors string[]
- finalized_root_task str
- finalizer str
- predecessors Sequence[str]
- finalizedRootTask String
- finalizer String
- predecessors List<String>
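TaskShowOutput mirrors what SHOW TASKS reports for the task, and taskRelations carries the predecessor/finalizer links described by TaskShowOutputTaskRelation. Below is a minimal sketch (TypeScript) of exporting a few of these fields; the resource arguments are placeholders, and the showOutputs element shape is assumed to match the fields listed above.

import * as snowflake from "@pulumi/snowflake";

// Placeholder task whose show output we want to inspect.
const auditedTask = new snowflake.Task("audited", {
    database: "MY_DB",
    schema: "MY_SCHEMA",
    sqlStatement: "SELECT 1",
    started: true,
});

// `showOutputs` is assumed to hold one entry per SHOW TASKS row for this task.
export const taskState = auditedTask.showOutputs.apply(outs => outs?.[0]?.state);
export const taskPredecessors = auditedTask.showOutputs.apply(
    outs => outs?.[0]?.taskRelations?.[0]?.predecessors,
);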
Package Details
- Repository
- Snowflake pulumi/pulumi-snowflake
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the snowflake Terraform Provider.