close()
Close httplib2 connections.
get(name, view=None, x__xgafv=None)
Gets a DataScanJob resource.
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Lists DataScanJobs under the given DataScan.
list_next()
Retrieves the next page of results.
close()
Close httplib2 connections.
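A minimal usage sketch (not part of the generated reference), assuming the Dataplex API is enabled and Application Default Credentials are available:

  from googleapiclient.discovery import build

  # Build the Dataplex v1 client (uses Application Default Credentials).
  service = build("dataplex", "v1")
  try:
      jobs = service.projects().locations().dataScans().jobs()
      # ... issue jobs.get() / jobs.list() calls here ...
  finally:
      # Release the underlying httplib2 connections.
      service.close()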
get(name, view=None, x__xgafv=None)
Gets a DataScanJob resource.

Args:
  name: string, Required. The resource name of the DataScanJob: projects/{project}/locations/{location_id}/dataScans/{data_scan_id}/jobs/{data_scan_job_id} where project refers to a project_id or project_number and location_id refers to a GCP region. (required)
  view: string, Optional. Select the DataScanJob view to return. Defaults to BASIC.
    Allowed values
      DATA_SCAN_JOB_VIEW_UNSPECIFIED - The API will default to the BASIC view.
      BASIC - Basic view that does not include spec and result.
      FULL - Include everything.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A DataScanJob represents an instance of DataScan execution.
  "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan.
    "profile": { # Contains name, type, mode and field type specific profile information. # The profile information per field.
      "fields": [ # List of fields with structural and profile information for each field.
        { # A field within a table.
          "mode": "A String", # The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
          "name": "A String", # The name of the field.
          "profile": { # The profile information for each field type. # Profile information for the corresponding field.
            "distinctRatio": 3.14, # Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
            "doubleProfile": { # The profile information for a double type field. # Double type field information.
              "average": 3.14, # Average of non-null values in the scanned data. NaN, if the field has a NaN.
              "max": 3.14, # Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
              "min": 3.14, # Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
              "quartiles": [ # A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
                3.14,
              ],
              "standardDeviation": 3.14, # Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
            },
            "integerProfile": { # The profile information for an integer type field. # Integer type field information.
              "average": 3.14, # Average of non-null values in the scanned data. NaN, if the field has a NaN.
              "max": "A String", # Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
              "min": "A String", # Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
              "quartiles": [ # A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
                "A String",
              ],
              "standardDeviation": 3.14, # Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
            },
            "nullRatio": 3.14, # Ratio of rows with null value against total scanned rows.
            "stringProfile": { # The profile information for a string type field. # String type field information.
              "averageLength": 3.14, # Average length of non-null values in the scanned data.
              "maxLength": "A String", # Maximum length of non-null values in the scanned data.
              "minLength": "A String", # Minimum length of non-null values in the scanned data.
            },
            "topNValues": [ # The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
              { # Top N non-null values in the scanned data.
                "count": "A String", # Count of the corresponding value in the scanned data.
                "value": "A String", # String value of a top N non-null value.
              },
            ],
          },
          "type": "A String", # The field data type. Possible values include: STRING BYTE INT64 INT32 INT16 DOUBLE FLOAT DECIMAL BOOLEAN BINARY TIMESTAMP DATE TIME NULL RECORD
        },
      ],
    },
    "rowCount": "A String", # The count of rows scanned.
    "scannedData": { # The data scanned during processing (e.g. in incremental DataScan) # The data scanned for this result.
      "incrementalField": { # A data range denoted by a pair of start/end values of a field. # The range denoted by values of an incremental field
        "end": "A String", # Value that marks the end of the range.
        "field": "A String", # The field that contains values which monotonically increase over time (e.g. a timestamp column).
        "start": "A String", # Value that marks the start of the range.
      },
    },
  },
  "dataProfileSpec": { # DataProfileScan related setting. # Output only. DataProfileScan related setting.
  },
  "dataQualityResult": { # The output of a DataQualityScan. # Output only. The result of the data quality scan.
    "dimensions": [ # A list of results at the dimension level.
      { # DataQualityDimensionResult provides a more detailed, per-dimension view of the results.
        "passed": True or False, # Whether the dimension passed or failed.
      },
    ],
    "passed": True or False, # Overall data quality result -- true if all rules passed.
    "rowCount": "A String", # The count of rows processed.
    "rules": [ # A list of all the rules in a job, and their results.
      { # DataQualityRuleResult provides a more detailed, per-rule view of the results.
        "evaluatedCount": "A String", # The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
        "failingRowsQuery": "A String", # The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
        "nullCount": "A String", # The number of rows with null values in the specified column.
        "passRatio": 3.14, # The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
        "passed": True or False, # Whether the rule passed or failed.
        "passedCount": "A String", # The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
        "rule": { # A rule captures data quality intent about a data source. # The rule specified in the DataQualitySpec, as is.
          "column": "A String", # Optional. The unnested column which this rule is evaluated against.
          "dimension": "A String", # Required. The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY".
          "ignoreNull": True or False, # Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
          "nonNullExpectation": { # Evaluates whether each column value is null. # ColumnMap rule which evaluates whether each column value is null.
          },
          "rangeExpectation": { # Evaluates whether each column value lies between a specified range. # ColumnMap rule which evaluates whether each column value lies between a specified range.
            "maxValue": "A String", # Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
            "minValue": "A String", # Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
            "strictMaxEnabled": True or False, # Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
            "strictMinEnabled": True or False, # Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
          },
          "regexExpectation": { # Evaluates whether each column value matches a specified regex. # ColumnMap rule which evaluates whether each column value matches a specified regex.
            "regex": "A String", # A regular expression the column value is expected to match.
          },
          "rowConditionExpectation": { # Evaluates whether each row passes the specified condition. The SQL expression needs to use BigQuery standard SQL syntax and should produce a boolean value per row as the result. Example: col1 >= 0 AND col2 < 10 # Table rule which evaluates whether each row passes the specified condition.
            "sqlExpression": "A String", # The SQL expression.
          },
          "setExpectation": { # Evaluates whether each column value is contained by a specified set. # ColumnMap rule which evaluates whether each column value is contained by a specified set.
            "values": [ # Expected values for the column value.
              "A String",
            ],
          },
          "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
            "maxValue": "A String", # The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
            "minValue": "A String", # The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
            "statistic": "A String",
            "strictMaxEnabled": True or False, # Whether column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
            "strictMinEnabled": True or False, # Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
          },
          "tableConditionExpectation": { # Evaluates whether the provided expression is true. The SQL expression needs to use BigQuery standard SQL syntax and should produce a scalar boolean result. Example: MIN(col1) >= 0 # Table rule which evaluates whether the provided expression is true.
            "sqlExpression": "A String", # The SQL expression.
          },
          "threshold": 3.14, # Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
          "uniquenessExpectation": { # Evaluates whether the column has duplicates. # ColumnAggregate rule which evaluates whether the column has duplicates.
          },
        },
      },
    ],
    "scannedData": { # The data scanned during processing (e.g. in incremental DataScan) # The data scanned for this result.
      "incrementalField": { # A data range denoted by a pair of start/end values of a field. # The range denoted by values of an incremental field
        "end": "A String", # Value that marks the end of the range.
        "field": "A String", # The field that contains values which monotonically increase over time (e.g. a timestamp column).
        "start": "A String", # Value that marks the start of the range.
      },
    },
  },
  "dataQualitySpec": { # DataQualityScan related setting. # Output only. DataQualityScan related setting.
    "rules": [ # The list of rules to evaluate against a data source. At least one rule is required.
      { # A rule captures data quality intent about a data source.
        "column": "A String", # Optional. The unnested column which this rule is evaluated against.
        "dimension": "A String", # Required. The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY".
        "ignoreNull": True or False, # Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
        "nonNullExpectation": { # Evaluates whether each column value is null. # ColumnMap rule which evaluates whether each column value is null.
        },
        "rangeExpectation": { # Evaluates whether each column value lies between a specified range. # ColumnMap rule which evaluates whether each column value lies between a specified range.
          "maxValue": "A String", # Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
          "minValue": "A String", # Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
          "strictMaxEnabled": True or False, # Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
          "strictMinEnabled": True or False, # Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
        },
        "regexExpectation": { # Evaluates whether each column value matches a specified regex. # ColumnMap rule which evaluates whether each column value matches a specified regex.
          "regex": "A String", # A regular expression the column value is expected to match.
        },
        "rowConditionExpectation": { # Evaluates whether each row passes the specified condition. The SQL expression needs to use BigQuery standard SQL syntax and should produce a boolean value per row as the result. Example: col1 >= 0 AND col2 < 10 # Table rule which evaluates whether each row passes the specified condition.
          "sqlExpression": "A String", # The SQL expression.
        },
        "setExpectation": { # Evaluates whether each column value is contained by a specified set. # ColumnMap rule which evaluates whether each column value is contained by a specified set.
          "values": [ # Expected values for the column value.
            "A String",
          ],
        },
        "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
          "maxValue": "A String", # The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
          "minValue": "A String", # The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
          "statistic": "A String",
          "strictMaxEnabled": True or False, # Whether column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
          "strictMinEnabled": True or False, # Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
        },
        "tableConditionExpectation": { # Evaluates whether the provided expression is true. The SQL expression needs to use BigQuery standard SQL syntax and should produce a scalar boolean result. Example: MIN(col1) >= 0 # Table rule which evaluates whether the provided expression is true.
          "sqlExpression": "A String", # The SQL expression.
        },
        "threshold": 3.14, # Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
        "uniquenessExpectation": { # Evaluates whether the column has duplicates. # ColumnAggregate rule which evaluates whether the column has duplicates.
        },
      },
    ],
  },
  "endTime": "A String", # Output only. The time when the DataScanJob ended.
  "message": "A String", # Output only. Additional information about the current state.
  "name": "A String", # Output only. The relative resource name of the DataScanJob, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}/jobs/{job_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
  "startTime": "A String", # Output only. The time when the DataScanJob was started.
  "state": "A String", # Output only. Execution state for the DataScanJob.
  "type": "A String", # Output only. The type of the parent DataScan.
  "uid": "A String", # Output only. System generated globally unique ID for the DataScanJob.
}
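As a hedged usage sketch (not part of the generated reference): the project, location, DataScan, and job IDs below are placeholders, and Application Default Credentials are assumed.

  from googleapiclient.discovery import build

  service = build("dataplex", "v1")

  # Placeholder resource name -- substitute your own IDs.
  job_name = (
      "projects/my-project/locations/us-central1"
      "/dataScans/my-scan/jobs/my-job-id"
  )

  # Request the FULL view so the spec and result fields are populated.
  job = (
      service.projects()
      .locations()
      .dataScans()
      .jobs()
      .get(name=job_name, view="FULL")
      .execute()
  )
  print(job["state"], job.get("startTime"))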
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Lists DataScanJobs under the given DataScan.

Args:
  parent: string, Required. The resource name of the parent environment: projects/{project}/locations/{location_id}/dataScans/{data_scan_id} where project refers to a project_id or project_number and location_id refers to a GCP region. (required)
  pageSize: integer, Optional. Maximum number of DataScanJobs to return. The service may return fewer than this value. If unspecified, at most 10 DataScanJobs will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.
  pageToken: string, Optional. Page token received from a previous ListDataScanJobs call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to ListDataScanJobs must match the call that provided the page token.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # List DataScanJobs response.
  "dataScanJobs": [ # DataScanJobs (BASIC view only) under a given dataScan.
    { # A DataScanJob represents an instance of DataScan execution.
      "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan.
        "profile": { # Contains name, type, mode and field type specific profile information. # The profile information per field.
          "fields": [ # List of fields with structural and profile information for each field.
            { # A field within a table.
              "mode": "A String", # The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
              "name": "A String", # The name of the field.
              "profile": { # The profile information for each field type. # Profile information for the corresponding field.
                "distinctRatio": 3.14, # Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
                "doubleProfile": { # The profile information for a double type field. # Double type field information.
                  "average": 3.14, # Average of non-null values in the scanned data. NaN, if the field has a NaN.
                  "max": 3.14, # Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
                  "min": 3.14, # Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
                  "quartiles": [ # A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
                    3.14,
                  ],
                  "standardDeviation": 3.14, # Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
                },
                "integerProfile": { # The profile information for an integer type field. # Integer type field information.
                  "average": 3.14, # Average of non-null values in the scanned data. NaN, if the field has a NaN.
                  "max": "A String", # Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
                  "min": "A String", # Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
                  "quartiles": [ # A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
                    "A String",
                  ],
                  "standardDeviation": 3.14, # Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
                },
                "nullRatio": 3.14, # Ratio of rows with null value against total scanned rows.
                "stringProfile": { # The profile information for a string type field. # String type field information.
                  "averageLength": 3.14, # Average length of non-null values in the scanned data.
                  "maxLength": "A String", # Maximum length of non-null values in the scanned data.
                  "minLength": "A String", # Minimum length of non-null values in the scanned data.
                },
                "topNValues": [ # The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
                  { # Top N non-null values in the scanned data.
                    "count": "A String", # Count of the corresponding value in the scanned data.
                    "value": "A String", # String value of a top N non-null value.
                  },
                ],
              },
              "type": "A String", # The field data type. Possible values include: STRING BYTE INT64 INT32 INT16 DOUBLE FLOAT DECIMAL BOOLEAN BINARY TIMESTAMP DATE TIME NULL RECORD
            },
          ],
        },
        "rowCount": "A String", # The count of rows scanned.
        "scannedData": { # The data scanned during processing (e.g. in incremental DataScan) # The data scanned for this result.
          "incrementalField": { # A data range denoted by a pair of start/end values of a field. # The range denoted by values of an incremental field
            "end": "A String", # Value that marks the end of the range.
            "field": "A String", # The field that contains values which monotonically increase over time (e.g. a timestamp column).
            "start": "A String", # Value that marks the start of the range.
          },
        },
      },
      "dataProfileSpec": { # DataProfileScan related setting. # Output only. DataProfileScan related setting.
      },
      "dataQualityResult": { # The output of a DataQualityScan. # Output only. The result of the data quality scan.
        "dimensions": [ # A list of results at the dimension level.
          { # DataQualityDimensionResult provides a more detailed, per-dimension view of the results.
            "passed": True or False, # Whether the dimension passed or failed.
          },
        ],
        "passed": True or False, # Overall data quality result -- true if all rules passed.
        "rowCount": "A String", # The count of rows processed.
        "rules": [ # A list of all the rules in a job, and their results.
          { # DataQualityRuleResult provides a more detailed, per-rule view of the results.
            "evaluatedCount": "A String", # The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
            "failingRowsQuery": "A String", # The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
            "nullCount": "A String", # The number of rows with null values in the specified column.
            "passRatio": 3.14, # The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
            "passed": True or False, # Whether the rule passed or failed.
            "passedCount": "A String", # The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
            "rule": { # A rule captures data quality intent about a data source. # The rule specified in the DataQualitySpec, as is.
              "column": "A String", # Optional. The unnested column which this rule is evaluated against.
              "dimension": "A String", # Required. The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY".
              "ignoreNull": True or False, # Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
              "nonNullExpectation": { # Evaluates whether each column value is null. # ColumnMap rule which evaluates whether each column value is null.
              },
              "rangeExpectation": { # Evaluates whether each column value lies between a specified range. # ColumnMap rule which evaluates whether each column value lies between a specified range.
                "maxValue": "A String", # Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
                "minValue": "A String", # Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
                "strictMaxEnabled": True or False, # Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
                "strictMinEnabled": True or False, # Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
              },
              "regexExpectation": { # Evaluates whether each column value matches a specified regex. # ColumnMap rule which evaluates whether each column value matches a specified regex.
                "regex": "A String", # A regular expression the column value is expected to match.
              },
              "rowConditionExpectation": { # Evaluates whether each row passes the specified condition. The SQL expression needs to use BigQuery standard SQL syntax and should produce a boolean value per row as the result. Example: col1 >= 0 AND col2 < 10 # Table rule which evaluates whether each row passes the specified condition.
                "sqlExpression": "A String", # The SQL expression.
              },
              "setExpectation": { # Evaluates whether each column value is contained by a specified set. # ColumnMap rule which evaluates whether each column value is contained by a specified set.
                "values": [ # Expected values for the column value.
                  "A String",
                ],
              },
              "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
                "maxValue": "A String", # The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
                "minValue": "A String", # The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
                "statistic": "A String",
                "strictMaxEnabled": True or False, # Whether column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
                "strictMinEnabled": True or False, # Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
              },
              "tableConditionExpectation": { # Evaluates whether the provided expression is true. The SQL expression needs to use BigQuery standard SQL syntax and should produce a scalar boolean result. Example: MIN(col1) >= 0 # Table rule which evaluates whether the provided expression is true.
                "sqlExpression": "A String", # The SQL expression.
              },
              "threshold": 3.14, # Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
              "uniquenessExpectation": { # Evaluates whether the column has duplicates. # ColumnAggregate rule which evaluates whether the column has duplicates.
              },
            },
          },
        ],
        "scannedData": { # The data scanned during processing (e.g. in incremental DataScan) # The data scanned for this result.
          "incrementalField": { # A data range denoted by a pair of start/end values of a field. # The range denoted by values of an incremental field
            "end": "A String", # Value that marks the end of the range.
            "field": "A String", # The field that contains values which monotonically increase over time (e.g. a timestamp column).
            "start": "A String", # Value that marks the start of the range.
          },
        },
      },
      "dataQualitySpec": { # DataQualityScan related setting. # Output only. DataQualityScan related setting.
        "rules": [ # The list of rules to evaluate against a data source. At least one rule is required.
          { # A rule captures data quality intent about a data source.
            "column": "A String", # Optional. The unnested column which this rule is evaluated against.
            "dimension": "A String", # Required. The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY".
            "ignoreNull": True or False, # Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
            "nonNullExpectation": { # Evaluates whether each column value is null. # ColumnMap rule which evaluates whether each column value is null.
            },
            "rangeExpectation": { # Evaluates whether each column value lies between a specified range. # ColumnMap rule which evaluates whether each column value lies between a specified range.
              "maxValue": "A String", # Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
              "minValue": "A String", # Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
              "strictMaxEnabled": True or False, # Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
              "strictMinEnabled": True or False, # Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
            },
            "regexExpectation": { # Evaluates whether each column value matches a specified regex. # ColumnMap rule which evaluates whether each column value matches a specified regex.
              "regex": "A String", # A regular expression the column value is expected to match.
            },
            "rowConditionExpectation": { # Evaluates whether each row passes the specified condition. The SQL expression needs to use BigQuery standard SQL syntax and should produce a boolean value per row as the result. Example: col1 >= 0 AND col2 < 10 # Table rule which evaluates whether each row passes the specified condition.
              "sqlExpression": "A String", # The SQL expression.
            },
            "setExpectation": { # Evaluates whether each column value is contained by a specified set. # ColumnMap rule which evaluates whether each column value is contained by a specified set.
              "values": [ # Expected values for the column value.
                "A String",
              ],
            },
            "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
              "maxValue": "A String", # The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
              "minValue": "A String", # The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
              "statistic": "A String",
              "strictMaxEnabled": True or False, # Whether column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
              "strictMinEnabled": True or False, # Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
            },
            "tableConditionExpectation": { # Evaluates whether the provided expression is true. The SQL expression needs to use BigQuery standard SQL syntax and should produce a scalar boolean result. Example: MIN(col1) >= 0 # Table rule which evaluates whether the provided expression is true.
              "sqlExpression": "A String", # The SQL expression.
            },
            "threshold": 3.14, # Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
            "uniquenessExpectation": { # Evaluates whether the column has duplicates. # ColumnAggregate rule which evaluates whether the column has duplicates.
            },
          },
        ],
      },
      "endTime": "A String", # Output only. The time when the DataScanJob ended.
      "message": "A String", # Output only. Additional information about the current state.
      "name": "A String", # Output only. The relative resource name of the DataScanJob, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}/jobs/{job_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
      "startTime": "A String", # Output only. The time when the DataScanJob was started.
      "state": "A String", # Output only. Execution state for the DataScanJob.
      "type": "A String", # Output only. The type of the parent DataScan.
      "uid": "A String", # Output only. System generated globally unique ID for the DataScanJob.
    },
  ],
  "nextPageToken": "A String", # Token to retrieve the next page of results, or empty if there are no more results in the list.
}
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
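Together with list(), this supports the standard pagination loop; a sketch under the same placeholder assumptions as above:

  from googleapiclient.discovery import build

  service = build("dataplex", "v1")
  jobs = service.projects().locations().dataScans().jobs()

  # Placeholder parent DataScan -- substitute your own IDs.
  parent = "projects/my-project/locations/us-central1/dataScans/my-scan"

  request = jobs.list(parent=parent, pageSize=100)
  while request is not None:
      response = request.execute()
      for job in response.get("dataScanJobs", []):
          print(job["name"], job["state"])
      # list_next returns None once the final page has been consumed.
      request = jobs.list_next(previous_request=request, previous_response=response)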