Cloud Dataplex API: projects.locations.dataScans.jobs

Instance Methods

close()

Close httplib2 connections.

get(name, view=None, x__xgafv=None)

Gets a DataScanJob resource.

list(parent, pageSize=None, pageToken=None, x__xgafv=None)

Lists DataScanJobs under the given dataScan.

list_next()

Retrieves the next page of results.

Method Details

close()
Close httplib2 connections.
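
For reference, a minimal sketch (not from the official docs) of releasing connections explicitly; googleapiclient service objects can also be used as context managers, which call close() on exit:

  from googleapiclient.discovery import build

  # Build the Dataplex v1 client (uses Application Default Credentials).
  dataplex = build("dataplex", "v1")
  try:
      ...  # issue requests here
  finally:
      dataplex.close()  # release the underlying httplib2 connections
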
get(name, view=None, x__xgafv=None)
Gets a DataScanJob resource.

Args:
  name: string, Required. The resource name of the DataScanJob: projects/{project}/locations/{location_id}/dataScans/{data_scan_id}/dataScanJobs/{data_scan_job_id}, where {project} refers to a project_id or project_number and {location_id} refers to a GCP region. (required)
  view: string, Optional. Used to select the subset of DataScanJob information to return. Defaults to BASIC.
    Allowed values
      DATA_SCAN_JOB_VIEW_UNSPECIFIED - The API will default to the BASIC view.
      BASIC - Basic view that does not include spec and result.
      FULL - Include everything.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A DataScanJob represents an instance of a data scan.
  "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan.
    "profile": { # Profile information describing the structure and layout of the data and contains the profile info. # This represents the profile information per field.
      "fields": [ # The sequence of fields describing data in table entities.
        { # Represents a column field within a table schema.
          "mode": "A String", # The mode of the field. Its value will be: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
          "name": "A String", # The name of the field.
          "profile": { # ProfileInfo defines the profile information for each schema field type. # The profile information for the corresponding field.
            "distinctRatio": 3.14, # The ratio of rows that are distinct against the rows in the sampled data.
            "doubleProfile": { # DoubleFieldInfo defines output for any double type field. # The corresponding double field profile.
              "average": 3.14, # The average of non-null values of double field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
              "max": 3.14, # The maximum value of a double field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
              "min": 3.14, # The minimum value of a double field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
              "quartiles": [ # A quartile divide the numebr of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. So, here the quartiles is provided as an ordered list of quartile values, occurring in order Q1, median, Q3.
                3.14,
              ],
              "standardDeviation": 3.14, # The standard deviation of non-null of double field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
            },
            "integerProfile": { # IntegerFieldInfo defines output for any integer type field. # The corresponding integer field profile.
              "average": 3.14, # The average of non-null values of integer field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
              "max": "A String", # The maximum value of an integer field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
              "min": "A String", # The minimum value of an integer field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
              "quartiles": [ # A quartile divide the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. So, here the quartiles is provided as an ordered list of quartile values, occurring in order Q1, median, Q3.
                "A String",
              ],
              "standardDeviation": 3.14, # The standard deviation of non-null of integer field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
            },
            "nullRatio": 3.14, # The ratio of null rows against the rows in the sampled data.
            "stringProfile": { # StringFieldInfo defines output info for any string type field. # The corresponding string field profile.
              "averageLength": 3.14, # The average length of a string field in the sampled data. Optional if zero non-null rows.
              "maxLength": "A String", # The maximum length of a string field in the sampled data. Optional if zero non-null rows.
              "minLength": "A String", # The minimum length of the string field in the sampled data. Optional if zero non-null rows.
            },
            "topNValues": [ # The array of top N values of the field in the sampled data. Currently N is set as 10 or equal to distinct values in the field, whichever is smaller. This will be optional for complex non-groupable data-types such as JSON, ARRAY, JSON, STRUCT.
              { # The TopNValue defines the structure of output of top N values of a field.
                "count": "A String", # The frequency count of the corresponding value in the field.
                "value": "A String", # The value is the string value of the actual value from the field.
              },
            ],
          },
          "type": "A String", # The field data type. Possible values include: STRING BYTE INT64 INT32 INT16 DOUBLE FLOAT DECIMAL BOOLEAN BINARY TIMESTAMP DATE TIME NULL RECORD
        },
      ],
    },
    "rowCount": "A String", # The count of all rows in the sampled data. Return 0, if zero rows.
    "scannedData": { # The data scanned during processing (e.g. in incremental DataScan) # The data scanned for this profile.
      "incrementalField": { # A data range denoted by a pair of start/end values of a field. # The range denoted by values of an incremental field
        "end": "A String", # Value that marks the end of the range
        "field": "A String", # The field that contains values which monotonically increases over time (e.g. timestamp).
        "start": "A String", # Value that marks the start of the range
      },
    },
  },
  "dataProfileSpec": { # DataProfileScan related setting. # Output only. DataProfileScan related setting.
  },
  "dataQualityResult": { # The output of a DataQualityScan. # Output only. The result of the data quality scan.
    "dimensions": [ # A list of results at the dimension-level.
      { # DataQualityDimensionResult provides a more detailed, per-dimension level view of the results.
        "passed": True or False, # Whether the dimension passed or failed.
      },
    ],
    "passed": True or False, # Overall data quality result -- true if all rules passed.
    "rowCount": "A String", # The count of rows processed.
    "rules": [ # A list of all the rules in a job, and their results.
      { # DataQualityRuleResult provides a more detailed, per-rule level view of the results.
        "evaluatedCount": "A String", # The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either (1) include all rows (default) - with null rows automatically failing rule evaluation OR (2) exclude null rows from the evaluated_count, by setting ignore_nulls = true
        "failingRowsQuery": "A String", # The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
        "nullCount": "A String", # The number of rows with null values in the specified column.
        "passRatio": 3.14, # The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
        "passed": True or False, # Whether the rule passed or failed.
        "passedCount": "A String", # The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
        "rule": { # A rule captures data quality intent about a data source. # The rule specified in the DataQualitySpec, as is.
          "column": "A String", # Optional. The unnested column which this rule is evaluated against.
          "dimension": "A String", # Required. The dimension a rule belongs to. Results are also aggregated at the dimension-level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
          "ignoreNull": True or False, # Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
          "nonNullExpectation": { # Evaluates whether each column value is null. # ColumnMap rule which evaluates whether each column value is null.
          },
          "rangeExpectation": { # Evaluates whether each column value lies between a specified range. # ColumnMap rule which evaluates whether each column value lies between a specified range.
            "maxValue": "A String", # Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
            "minValue": "A String", # Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
            "strictMaxEnabled": True or False, # Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
            "strictMinEnabled": True or False, # Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
          },
          "regexExpectation": { # Evaluates whether each column value matches a specified regex. # ColumnMap rule which evaluates whether each column value matches a specified regex.
            "regex": "A String",
          },
          "rowConditionExpectation": { # Evaluates whether each row passes the specified condition. The SQL expression needs to use BigQuery standard SQL syntax and should produce a boolean per row as the result. Example: col1 >= 0 AND col2 < 10 # Table rule which evaluates whether each row passes the specified condition.
            "sqlExpression": "A String",
          },
          "setExpectation": { # Evaluates whether each column value is contained by a specified set. # ColumnMap rule which evaluates whether each column value is contained by a specified set.
            "values": [
              "A String",
            ],
          },
          "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
            "maxValue": "A String", # The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
            "minValue": "A String", # The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
            "statistic": "A String",
            "strictMaxEnabled": True or False, # Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
            "strictMinEnabled": True or False, # Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
          },
          "tableConditionExpectation": { # Evaluates whether the provided expression is true. The SQL expression needs to use BigQuery standard SQL syntax and should produce a scalar boolean result. Example: MIN(col1) >= 0 # Table rule which evaluates whether the provided expression is true.
            "sqlExpression": "A String",
          },
          "threshold": 3.14, # Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of 0.0, 1.00 indicates default value (i.e. 1.0)
          "uniquenessExpectation": { # Evaluates whether the column has duplicates. # ColumnAggregate rule which evaluates whether the column has duplicates.
          },
        },
      },
    ],
    "scannedData": { # The data scanned during processing (e.g. in incremental DataScan) # The data scanned for this result.
      "incrementalField": { # A data range denoted by a pair of start/end values of a field. # The range denoted by values of an incremental field
        "end": "A String", # Value that marks the end of the range
        "field": "A String", # The field that contains values which monotonically increases over time (e.g. timestamp).
        "start": "A String", # Value that marks the start of the range
      },
    },
  },
  "dataQualitySpec": { # DataQualityScan related setting. # Output only. DataQualityScan related setting.
    "rules": [ # The list of rules to evaluate against a data source. At least one rule is required.
      { # A rule captures data quality intent about a data source.
        "column": "A String", # Optional. The unnested column which this rule is evaluated against.
        "dimension": "A String", # Required. The dimension a rule belongs to. Results are also aggregated at the dimension-level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
        "ignoreNull": True or False, # Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
        "nonNullExpectation": { # Evaluates whether each column value is null. # ColumnMap rule which evaluates whether each column value is null.
        },
        "rangeExpectation": { # Evaluates whether each column value lies between a specified range. # ColumnMap rule which evaluates whether each column value lies between a specified range.
          "maxValue": "A String", # Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
          "minValue": "A String", # Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
          "strictMaxEnabled": True or False, # Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
          "strictMinEnabled": True or False, # Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
        },
        "regexExpectation": { # Evaluates whether each column value matches a specified regex. # ColumnMap rule which evaluates whether each column value matches a specified regex.
          "regex": "A String",
        },
        "rowConditionExpectation": { # Evaluates whether each row passes the specified condition. The SQL expression needs to use BigQuery standard SQL syntax and should produce a boolean per row as the result. Example: col1 >= 0 AND col2 < 10 # Table rule which evaluates whether each row passes the specified condition.
          "sqlExpression": "A String",
        },
        "setExpectation": { # Evaluates whether each column value is contained by a specified set. # ColumnMap rule which evaluates whether each column value is contained by a specified set.
          "values": [
            "A String",
          ],
        },
        "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
          "maxValue": "A String", # The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
          "minValue": "A String", # The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
          "statistic": "A String",
          "strictMaxEnabled": True or False, # Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
          "strictMinEnabled": True or False, # Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
        },
        "tableConditionExpectation": { # Evaluates whether the provided expression is true. The SQL expression needs to use BigQuery standard SQL syntax and should produce a scalar boolean result. Example: MIN(col1) >= 0 # Table rule which evaluates whether the provided expression is true.
          "sqlExpression": "A String",
        },
        "threshold": 3.14, # Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of 0.0, 1.00 indicates default value (i.e. 1.0)
        "uniquenessExpectation": { # Evaluates whether the column has duplicates. # ColumnAggregate rule which evaluates whether the column has duplicates.
        },
      },
    ],
  },
  "endTime": "A String", # Output only. The time when the DataScanJob ended.
  "message": "A String", # Output only. Additional information about the current state.
  "name": "A String", # Output only. The relative resource name of the DataScanJob, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}/jobs/{job_id}. where {project} refers to a project_id or project_number and location_id refers to a GCP region.
  "startTime": "A String", # Output only. The time when the DataScanJob was started.
  "state": "A String", # Output only. Execution state for the DataScanJob.
  "type": "A String", # Output only. The type of the parent DataScan.
  "uid": "A String", # Output only. System generated globally unique ID for the DataScanJob.
}
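
For reference, a minimal sketch of calling get(); the project, location, scan, and job IDs below are placeholders, and the client is assumed to be built with google-api-python-client using Application Default Credentials:

  from googleapiclient.discovery import build

  dataplex = build("dataplex", "v1")

  # Placeholder resource name, in the format returned in a job's name field;
  # substitute your own project, region, scan, and job IDs.
  job_name = (
      "projects/my-project/locations/us-central1"
      "/dataScans/my-scan/jobs/my-job-id"
  )

  # Request the FULL view so the spec and result fields are populated.
  job = (
      dataplex.projects()
      .locations()
      .dataScans()
      .jobs()
      .get(name=job_name, view="FULL")
      .execute()
  )
  print(job["state"], job.get("message", ""))
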
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Lists DataScanJobs under the given dataScan.

Args:
  parent: string, Required. The resource name of the parent DataScan: projects/{project}/locations/{location_id}/dataScans/{data_scan_id}, where {project} refers to a project_id or project_number and {location_id} refers to a GCP region. (required)
  pageSize: integer, Optional. Maximum number of DataScanJobs to return. The service may return fewer than this value. If unspecified, at most 10 DataScanJobs will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.
  pageToken: string, Optional. Page token received from a previous ListDataScanJobs call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to ListDataScanJobs must match the call that provided the page token.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # List DataScanJobs response.
  "dataScanJobs": [ # DataScanJobs (metadata only) under a given dataScan.
    { # A DataScanJob represents an instance of a data scan.
      "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan.
        "profile": { # Profile information describing the structure and layout of the data and contains the profile info. # This represents the profile information per field.
          "fields": [ # The sequence of fields describing data in table entities.
            { # Represents a column field within a table schema.
              "mode": "A String", # The mode of the field. Its value will be: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
              "name": "A String", # The name of the field.
              "profile": { # ProfileInfo defines the profile information for each schema field type. # The profile information for the corresponding field.
                "distinctRatio": 3.14, # The ratio of rows that are distinct against the rows in the sampled data.
                "doubleProfile": { # DoubleFieldInfo defines output for any double type field. # The corresponding double field profile.
                  "average": 3.14, # The average of non-null values of double field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
                  "max": 3.14, # The maximum value of a double field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
                  "min": 3.14, # The minimum value of a double field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
                  "quartiles": [ # A quartile divide the numebr of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. So, here the quartiles is provided as an ordered list of quartile values, occurring in order Q1, median, Q3.
                    3.14,
                  ],
                  "standardDeviation": 3.14, # The standard deviation of non-null of double field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
                },
                "integerProfile": { # IntegerFieldInfo defines output for any integer type field. # The corresponding integer field profile.
                  "average": 3.14, # The average of non-null values of integer field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
                  "max": "A String", # The maximum value of an integer field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
                  "min": "A String", # The minimum value of an integer field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
                  "quartiles": [ # A quartile divide the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. So, here the quartiles is provided as an ordered list of quartile values, occurring in order Q1, median, Q3.
                    "A String",
                  ],
                  "standardDeviation": 3.14, # The standard deviation of non-null of integer field in the sampled data. Return NaN, if the field has a NaN. Optional if zero non-null rows.
                },
                "nullRatio": 3.14, # The ratio of null rows against the rows in the sampled data.
                "stringProfile": { # StringFieldInfo defines output info for any string type field. # The corresponding string field profile.
                  "averageLength": 3.14, # The average length of a string field in the sampled data. Optional if zero non-null rows.
                  "maxLength": "A String", # The maximum length of a string field in the sampled data. Optional if zero non-null rows.
                  "minLength": "A String", # The minimum length of the string field in the sampled data. Optional if zero non-null rows.
                },
                "topNValues": [ # The array of top N values of the field in the sampled data. Currently N is set as 10 or equal to distinct values in the field, whichever is smaller. This will be optional for complex non-groupable data-types such as JSON, ARRAY, JSON, STRUCT.
                  { # The TopNValue defines the structure of output of top N values of a field.
                    "count": "A String", # The frequency count of the corresponding value in the field.
                    "value": "A String", # The value is the string value of the actual value from the field.
                  },
                ],
              },
              "type": "A String", # The field data type. Possible values include: STRING BYTE INT64 INT32 INT16 DOUBLE FLOAT DECIMAL BOOLEAN BINARY TIMESTAMP DATE TIME NULL RECORD
            },
          ],
        },
        "rowCount": "A String", # The count of all rows in the sampled data. Return 0, if zero rows.
        "scannedData": { # The data scanned during processing (e.g. in incremental DataScan) # The data scanned for this profile.
          "incrementalField": { # A data range denoted by a pair of start/end values of a field. # The range denoted by values of an incremental field
            "end": "A String", # Value that marks the end of the range
            "field": "A String", # The field that contains values which monotonically increases over time (e.g. timestamp).
            "start": "A String", # Value that marks the start of the range
          },
        },
      },
      "dataProfileSpec": { # DataProfileScan related setting. # Output only. DataProfileScan related setting.
      },
      "dataQualityResult": { # The output of a DataQualityScan. # Output only. The result of the data quality scan.
        "dimensions": [ # A list of results at the dimension-level.
          { # DataQualityDimensionResult provides a more detailed, per-dimension level view of the results.
            "passed": True or False, # Whether the dimension passed or failed.
          },
        ],
        "passed": True or False, # Overall data quality result -- true if all rules passed.
        "rowCount": "A String", # The count of rows processed.
        "rules": [ # A list of all the rules in a job, and their results.
          { # DataQualityRuleResult provides a more detailed, per-rule level view of the results.
            "evaluatedCount": "A String", # The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either (1) include all rows (default) - with null rows automatically failing rule evaluation OR (2) exclude null rows from the evaluated_count, by setting ignore_nulls = true
            "failingRowsQuery": "A String", # The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
            "nullCount": "A String", # The number of rows with null values in the specified column.
            "passRatio": 3.14, # The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
            "passed": True or False, # Whether the rule passed or failed.
            "passedCount": "A String", # The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
            "rule": { # A rule captures data quality intent about a data source. # The rule specified in the DataQualitySpec, as is.
              "column": "A String", # Optional. The unnested column which this rule is evaluated against.
              "dimension": "A String", # Required. The dimension a rule belongs to. Results are also aggregated at the dimension-level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
              "ignoreNull": True or False, # Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
              "nonNullExpectation": { # Evaluates whether each column value is null. # ColumnMap rule which evaluates whether each column value is null.
              },
              "rangeExpectation": { # Evaluates whether each column value lies between a specified range. # ColumnMap rule which evaluates whether each column value lies between a specified range.
                "maxValue": "A String", # Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
                "minValue": "A String", # Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
                "strictMaxEnabled": True or False, # Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
                "strictMinEnabled": True or False, # Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
              },
              "regexExpectation": { # Evaluates whether each column value matches a specified regex. # ColumnMap rule which evaluates whether each column value matches a specified regex.
                "regex": "A String",
              },
              "rowConditionExpectation": { # Evaluates whether each row passes the specified condition. The SQL expression needs to use BigQuery standard SQL syntax and should produce a boolean per row as the result. Example: col1 >= 0 AND col2 < 10 # Table rule which evaluates whether each row passes the specified condition.
                "sqlExpression": "A String",
              },
              "setExpectation": { # Evaluates whether each column value is contained by a specified set. # ColumnMap rule which evaluates whether each column value is contained by a specified set.
                "values": [
                  "A String",
                ],
              },
              "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
                "maxValue": "A String", # The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
                "minValue": "A String", # The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
                "statistic": "A String",
                "strictMaxEnabled": True or False, # Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
                "strictMinEnabled": True or False, # Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
              },
              "tableConditionExpectation": { # Evaluates whether the provided expression is true. The SQL expression needs to use BigQuery standard SQL syntax and should produce a scalar boolean result. Example: MIN(col1) >= 0 # Table rule which evaluates whether the provided expression is true.
                "sqlExpression": "A String",
              },
              "threshold": 3.14, # Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of 0.0, 1.00 indicates default value (i.e. 1.0)
              "uniquenessExpectation": { # Evaluates whether the column has duplicates. # ColumnAggregate rule which evaluates whether the column has duplicates.
              },
            },
          },
        ],
        "scannedData": { # The data scanned during processing (e.g. in incremental DataScan) # The data scanned for this result.
          "incrementalField": { # A data range denoted by a pair of start/end values of a field. # The range denoted by values of an incremental field
            "end": "A String", # Value that marks the end of the range
            "field": "A String", # The field that contains values which monotonically increases over time (e.g. timestamp).
            "start": "A String", # Value that marks the start of the range
          },
        },
      },
      "dataQualitySpec": { # DataQualityScan related setting. # Output only. DataQualityScan related setting.
        "rules": [ # The list of rules to evaluate against a data source. At least one rule is required.
          { # A rule captures data quality intent about a data source.
            "column": "A String", # Optional. The unnested column which this rule is evaluated against.
            "dimension": "A String", # Required. The dimension a rule belongs to. Results are also aggregated at the dimension-level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
            "ignoreNull": True or False, # Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
            "nonNullExpectation": { # Evaluates whether each column value is null. # ColumnMap rule which evaluates whether each column value is null.
            },
            "rangeExpectation": { # Evaluates whether each column value lies between a specified range. # ColumnMap rule which evaluates whether each column value lies between a specified range.
              "maxValue": "A String", # Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
              "minValue": "A String", # Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
              "strictMaxEnabled": True or False, # Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
              "strictMinEnabled": True or False, # Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
            },
            "regexExpectation": { # Evaluates whether each column value matches a specified regex. # ColumnMap rule which evaluates whether each column value matches a specified regex.
              "regex": "A String",
            },
            "rowConditionExpectation": { # Evaluates whether each row passes the specified condition. The SQL expression needs to use BigQuery standard SQL syntax and should produce a boolean per row as the result. Example: col1 >= 0 AND col2 < 10 # Table rule which evaluates whether each row passes the specified condition.
              "sqlExpression": "A String",
            },
            "setExpectation": { # Evaluates whether each column value is contained by a specified set. # ColumnMap rule which evaluates whether each column value is contained by a specified set.
              "values": [
                "A String",
              ],
            },
            "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
              "maxValue": "A String", # The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
              "minValue": "A String", # The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
              "statistic": "A String",
              "strictMaxEnabled": True or False, # Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
              "strictMinEnabled": True or False, # Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
            },
            "tableConditionExpectation": { # Evaluates whether the provided expression is true. The SQL expression needs to use BigQuery standard SQL syntax and should produce a scalar boolean result. Example: MIN(col1) >= 0 # Table rule which evaluates whether the provided expression is true.
              "sqlExpression": "A String",
            },
            "threshold": 3.14, # Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of 0.0, 1.00 indicates default value (i.e. 1.0)
            "uniquenessExpectation": { # Evaluates whether the column has duplicates. # ColumnAggregate rule which evaluates whether the column has duplicates.
            },
          },
        ],
      },
      "endTime": "A String", # Output only. The time when the DataScanJob ended.
      "message": "A String", # Output only. Additional information about the current state.
      "name": "A String", # Output only. The relative resource name of the DataScanJob, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}/jobs/{job_id}. where {project} refers to a project_id or project_number and location_id refers to a GCP region.
      "startTime": "A String", # Output only. The time when the DataScanJob was started.
      "state": "A String", # Output only. Execution state for the DataScanJob.
      "type": "A String", # Output only. The type of the parent DataScan.
      "uid": "A String", # Output only. System generated globally unique ID for the DataScanJob.
    },
  ],
  "nextPageToken": "A String", # Token to retrieve the next page of results, or empty if there are no more results in the list.
}
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
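
For reference, a minimal sketch of paging through all jobs of a scan with list() and list_next(); the parent name below is a placeholder:

  from googleapiclient.discovery import build

  dataplex = build("dataplex", "v1")

  # Placeholder parent; substitute your project, region, and scan ID.
  parent = "projects/my-project/locations/us-central1/dataScans/my-scan"

  jobs_api = dataplex.projects().locations().dataScans().jobs()
  request = jobs_api.list(parent=parent, pageSize=100)

  # list_next() returns None once nextPageToken is exhausted.
  while request is not None:
      response = request.execute()
      for job in response.get("dataScanJobs", []):
          print(job["name"], job["state"])
      request = jobs_api.list_next(
          previous_request=request, previous_response=response
      )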