The fixed structure must be technology agnostic. The first fields of the fixed structure are the following:
* `Status: [Option[String]]` an enum representing the status of this version of the Data Product. Allowed values are: `[Draft|Published|Retired]`. This metadata communicates the overall status of the Data Product but does not reflect the actual deployment status.
* `Maturity: [Option[String]]` an enum that lets the consumer understand whether this is a tactical solution or not. It is really useful during a migration from a Data Warehouse or Data Lake. Allowed values are: `[Tactical|Strategic]`.
* `Billing: [Option[Yaml]]` a free-form key-value area where it is possible to put information useful for resource tagging and billing.
* `Tags: [Array[Yaml]]` tag labels at Data Product level (please refer to OpenMetadata: https://docs.open-metadata.org/metadata-standard/schemas/types/taglabel).
* `Specific: [Yaml]` a custom section where we can put all the information strictly related to a specific execution environment. It can also refer to an additional file. At this level we also embed all the information needed to provision the general infrastructure (resource groups, networking, etc.) for a specific Data Product. For example, if a company decides to create a ResourceGroup for each Data Product and to have a subscription reference for each domain and environment, this is specified at this level. It is also recommended to put general security here: Azure Policy or IAM policies, VPC/VNet, subnets. This section is filled by merging data defined at the common level with values defined specifically for the selected environment.
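A minimal sketch of these fields in a descriptor; all values and the keys inside `Billing` and `Specific` are hypothetical, chosen only for illustration:

```yaml
Status: Draft
Maturity: Tactical
Billing:
  costCenter: cc-1234          # free-form key/value pairs, hypothetical names
Tags:
  - tagFQN: PII.Sensitive      # OpenMetadata tag label, illustrative
Specific:
  resourceGroup: dp-sales-rg   # environment-specific detail, hypothetical
```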
The **unique identifier** of a Data Product is the concatenation of Domain, Name and Version. We will refer to it as the `DP_UK`, a URN that ends in the following way: `$DPDomain:$DPName:$DPMajorVersion`.
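For example, version 1 of a hypothetical `CashFlow` Data Product in a `Finance` domain (domain and name are made up for illustration) would have the identifier:

```
Finance:CashFlow:1
```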
* `ProcessDescription: [Option[String]]` the underlying process that contributes to generating the data exposed by this output port.
* `DataContract: [Yaml]` any change in this section is a breaking change, because the producer is breaking the contract; it requires creating a new version of the Data Product to keep backward compatibility.
* `Schema: [Array[Yaml]]` to describe a schema we propose to leverage the OpenMetadata specification: https://docs.open-metadata.org/metadata-standard/schemas/entities/table#column. Each column can have a tag array, and you can choose between simple LabelTags, ClassificationTags or DescriptiveTags. Here is an example of a classification tag: https://github.com/open-metadata/OpenMetadata/blob/main/catalog-rest-service/src/main/resources/json/data/tags/piiTags.json.
* `SLA: [Yaml]` Service Level Agreement; describes the quality of data delivery and of the output port in general. It represents the producer's overall promise to the consumers.
    * `IntervalOfChange: [Option[String]]` how often changes in the data are reflected.
    * `Timeliness: [Option[String]]` the skew between the time a business fact occurs and when it becomes visible in the data.
    * `UpTime: [Option[String]]` the percentage of port availability.
* `TermsAndConditions: [Option[String]]` whether the data is usable only in specific environments.
* `Endpoint: [Option[URL]]` the API endpoint that self-describes the output port and provides insightful information at runtime about the physical location of the data, the protocol to be used, etc.
* `DataSharingAgreement: [Yaml]` this part covers usage, privacy, purpose and limitations, and is independent of the data contract.
    * `Purpose: [Option[String]]` what is the goal of this data set.
    * `Billing: [Option[String]]` how a consumer will be charged back when it consumes this output port.
    * `Security: [Option[String]]` additional information related to security aspects, like restrictions, masking, sensitive information and privacy.
    * `IntendedUsage: [Option[String]]` any other information needed by the consumer in order to effectively consume the data; it could be technical (e.g. extract no more than one year of data for good performance) or related to business domains (e.g. this data is only useful in the marketing domain).
    * `Limitations: [Option[String]]` if any limitation is present, it must be made super clear to the consumers.
    * `LifeCycle: [Option[String]]` describes how the data will be historicized, and how and when it will be deleted.
    * `Confidentiality: [Option[String]]` describes what a consumer should do to keep the information confidential and how to process and store it, including permission to share or report it.
* `Tags: [Array[Yaml]]` tag labels at Output Port level; here we can have a security classification, for example (please refer to OpenMetadata: https://docs.open-metadata.org/metadata-standard/schemas/types/taglabel).
* `SampleData: [Option[Yaml]]` provides sample data for your Output Port. See the OpenMetadata specification: https://docs.open-metadata.org/openmetadata/schemas/entities/table#tabledata.
* `SemanticLinking: [Option[Yaml]]` here we can express semantic relationships between this output port and other output ports (also coming from other domains and Data Products). For example, we could say that column "customerId" of our SQL Output Port references column "id" of the SQL Output Port of the "Customer" Data Product.
* `Specific: [Yaml]` a custom section where we must put all the information strictly related to a specific technology or dependent on a standard/policy defined in the federated governance.
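Putting the fields above together, a sketch of an Output Port section. All values, names and URLs are hypothetical; nesting `Schema` and `SLA` under `DataContract` is one plausible reading of the fields, not something this text prescribes:

```yaml
ProcessDescription: Nightly batch computed from the internal CRM tables
DataContract:
  Schema:
    - name: customerId               # OpenMetadata-style column, illustrative
      dataType: BIGINT
      description: Unique identifier of the customer
      tags:
        - tagFQN: PII.Sensitive      # classification tag, illustrative
  SLA:
    IntervalOfChange: 1 day
    Timeliness: 1 hour
    UpTime: 99.9%
  TermsAndConditions: Usable only in the production environment
  Endpoint: https://example.com/dp/customer/v1   # hypothetical URL
DataSharingAgreement:
  Purpose: Foundational customer master data
  Billing: Charged back per query
  Security: Emails are masked for non-privileged consumers
  IntendedUsage: Extract no more than one year of data per query
  Limitations: Data older than 10 years is not available
  LifeCycle: Full history kept for 10 years, then deleted
  Confidentiality: Do not share extracts with third parties
Tags: []
SemanticLinking:
  - sourceColumn: customerId
    target: Sales:Customer:1:SQLOutputPort.id    # hypothetical link shape
Specific: {}
```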
* `Technology: [Option[String]]` represents which technology is used to define the workload, like Spark, Flink, PySpark, etc. The underlying technology is useful to better understand how the workload processes data.
* `WorkloadType: [Option[String]]` explains what type of workload it is: Ingestion ETL, Streaming, Internal Process, etc.
* `ConnectionType: [Option[String]]` an enum with allowed values: `[HouseKeeping|DataPipeline]`. `HouseKeeping` is for all workloads that act on internal data without any external dependency; `DataPipeline` is for workloads that read from output ports of other Data Products or from external systems.
* `Tags: [Array[Yaml]]` tag labels at Workload level (please refer to OpenMetadata: https://docs.open-metadata.org/metadata-standard/schemas/types/taglabel).
* `ReadsFrom: [Array[String]]` filled only for `DataPipeline` workloads; it represents the list of Output Ports or external systems that the workload uses as input. Output Ports are identified with `DP_UK:$OutputPortName`, while external systems are identified by a URN in the form `urn:dmb:ex:$SystemName`. This field may be elaborated further in the future into a more semantic structure.
Constraints:
* This array will only contain Output Port IDs and/or external systems identifiers.
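A sketch of a workload section respecting the constraint above; the technology, Data Product and system names are made up for illustration:

```yaml
Technology: Spark
WorkloadType: Ingestion ETL
ConnectionType: DataPipeline
Tags: []
ReadsFrom:
  - Finance:CashFlow:1:SQLOutputPort   # DP_UK:$OutputPortName, hypothetical
  - urn:dmb:ex:SAP                     # external system URN, hypothetical
```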
* `Platform: [Option[String]]` represents the vendor: Azure, GCP, AWS, CDP on AWS, etc. It is a free field, but it is useful to better understand the platform where the component will be running.
* `Technology: [Option[String]]` represents which technology is used to define the storage area, like S3, Kafka, Athena, etc. The underlying technology is useful to better understand how the data is internally stored.
* `StorageType: [Option[String]]` the specific type of storage: Files, SQL, Events, etc.
* `Tags: [Array[Yaml]]` tag labels at Storage Area level (please refer to OpenMetadata: https://docs.open-metadata.org/metadata-standard/schemas/types/taglabel).
* `Specific: [Yaml]` a custom section where we can put all the information strictly related to a specific technology or dependent on a standard/policy defined in the federated governance.
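A sketch of a storage area section; the platform, technology and the key inside `Specific` are hypothetical:

```yaml
Platform: AWS
Technology: S3
StorageType: Files
Tags: []
Specific:
  bucket: dp-internal-storage   # technology-specific detail, hypothetical
```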