From 83c0b6328b425f3deb2b8403c2903caa7c2e60fd Mon Sep 17 00:00:00 2001 From: Maxime Vono <28217226+mvono@users.noreply.github.com> Date: Fri, 17 Mar 2023 14:32:34 +0100 Subject: [PATCH 01/10] Update readme.md --- threat-model/readme.md | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index d629fe6..1f6384b 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -10,7 +10,7 @@ This document is currently a **draft**, submitted to the PATCG (and eventually t In this document, we outline the security considerations for proposed purpose-constrained APIs for the web platform (that is, within browsers, mobileOSs, and other user-agents) specified by the Private Advertising Technologies Working Group (PATWG). -Many of these proposals attempt to leverage the concept of _private computation_ as a component of these purpose-constrained APIs. An ideal private computation system would allow for the evaluation of a predefined function (i.e., the constrained purpose,) without revealing any new information to any party beyond the output of that predefined function. Private computation can be used to perform aggregation over inputs which, individually, must not be revealed. +Many of these proposals attempt to leverage the concept of _private computation_ as a component of these purpose-constrained APIs. An ideal private computation system would allow for the evaluation of a pre-defined function (i.e., the constrained purpose,) without revealing any new information to any party beyond the output of that predefined function. For instance, private computation could be leveraged to address two main advertising use-cases namely reporting and campaign optimization. Regarding reporting, private computation could be used to perform aggregation over inputs which, individually, must not be revealed. 
For campaign optimization, private computation could be used to train a machine learning algorithm over those private inputs without revealing them. Private computation can be instantiated using several technologies: @@ -24,8 +24,8 @@ For our threat model, we assume that an active attacker can control the network In the presence of this adversary, APIs should aim to achieve the following goals: -1. **Privacy**: Clients (and, more specifically, the vendors who distribute the clients) trust that (within the threat models), the API is purpose constrained. That is, all parties learn nothing beyond the intended result (e.g., a differentially private aggregation function computed over the client inputs.) -2. **Correctness:** Parties receiving the intended result trust that the protocol is executed correctly. Moreover, the amount that a result can be skewed by malicious input is bounded and known. +1. **Privacy**: Clients (and, more specifically, the vendors who distribute the clients) trust that (within the threat models), the API is purpose constrained. That is, all parties receive nothing beyond the intended result (e.g., a differentially private aggregation function computed over the client inputs.) +2. **Correctness:** Parties receiving the intended result have the guarantee that the protocol is executed correctly. Moreover, the amount that a result can be skewed by malicious input is bounded and known. Specific proposed purpose constrained APIs will provide their own analysis about how they achieve these properties. This threat model does not address aspects that are specific to specific private computation designs or configurations. Each private computation instantiation provides different options for defense against attacks. Web platform vendors can decide which configurations produce adequate safeguards for their APIs and users. This is explored further in [section 4. Private Computation Configurations](#4-private-computation-configurations). 
@@ -40,7 +40,9 @@ In this section, we enumerate the potential actors that may participate in a pro #### 1.1.1. Assets 1. Original inputs provided to client APIs. Clients expose these APIs to other actors below, which can modify the client’s assets, but should not reveal them. -2. Unencrypted input shares, for systems which rely on secret sharing among aggregators. +2. Unencrypted input shares, for systems which rely on secret sharing among + +ors. #### 1.1.2. Capabilities @@ -77,7 +79,7 @@ In this section, we enumerate the potential actors that may participate in a pro #### 1.2.3. Mitigations 1. Modification of client assets should be limited by the API interface to only allow for intended modifications. -2. Use of differential privacy (see [section 3. Aggregation and Anonymization](#3-Aggregation-and-Anonymization)) should be used to prevent +2. Use of differential privacy (see [section 3. Aggregation and Anonymization](#3-Aggregation-and-Anonymization)) should be used to prevent [This sentence is not finished] ### 1.3. Delegated Parties (Cross Site/App) @@ -169,14 +171,15 @@ An coordinator is type of helper party which participates in a helper party netw ### 1.7. Helper party collusion -If enough helper parties collude (beyond the proposal-specific subset which an attacker is assumed to control), then none of the properties of the system hold. Such scenarios are outside the threat mode. +If enough helper parties collude (beyond the proposal-specific subset which an attacker is assumed to control), then none of the properties of the system hold. Such scenarios are outside the threat model. However, we do assume that an attacker can always control at least one helper party. That is, there can be no perfectly trusted helper parties. +### 1.8 On Premise Solutions for Helper Parties ### 1.8 Cloud Providers for Helper Parties -Helper parties may run either on physical machines owned by directly by the aggregator or (more commonly) subcontract with a cloud provider. 
We assume that an attacker can control some subset of cloud providers. +Helper parties may run either on physical machines owned directly by the aggregator or (more commonly) subcontract with a cloud provider. We assume that an attacker can control some subset of cloud providers. #### 1.8.1 Assets From 4d80c1f5f699a000001757100382d23b661379f3 Mon Sep 17 00:00:00 2001 From: Maxime Vono <28217226+mvono@users.noreply.github.com> Date: Fri, 17 Mar 2023 14:42:17 +0100 Subject: [PATCH 02/10] Update readme.md --- threat-model/readme.md | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index 1f6384b..5cdaea4 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -18,9 +18,9 @@ Private computation can be instantiated using several technologies: * A trusted execution environment (TEE) isolates computation and its state by using specialized hardware. * Fully homomorphic encryption (FHE) enables computation on the ciphertext of encrypted inputs. -Though the implementation details differ for each technology, ultimately they all rely on finding at least two entities - or _aggregators_ - that can be trusted not to conspire to reveal private inputs. The forms considered by existing attribution proposals are MPC and TEEs. +Though the implementation details differ for each technology, ultimately they all rely on finding at least two entities (also called helper parties) that can be trusted not to conspire to reveal private inputs. In the sequel, such entities will be referred to as _aggregators_ and _coordinators_ for MPC-based and TEE-based private computations, respectively. -For our threat model, we assume that an active attacker can control the network and has the ability to corrupt any number of clients, the parties who call the proposed APIs, and some subset of aggregators, when used. 
+For our threat model, we assume that an active attacker can control a network of helper parties and has the ability to corrupt any number of clients, the parties who call the proposed APIs, and some subset of aggregators or collectors, when used. In the presence of this adversary, APIs should aim to achieve the following goals: @@ -40,9 +40,7 @@ In this section, we enumerate the potential actors that may participate in a pro #### 1.1.1. Assets 1. Original inputs provided to client APIs. Clients expose these APIs to other actors below, which can modify the client’s assets, but should not reveal them. -2. Unencrypted input shares, for systems which rely on secret sharing among - -ors. +2. Unencrypted input shares, for systems which rely on secret sharing among aggregators. #### 1.1.2. Capabilities From 7eba935fc641355be2c656277f4f74fbb5e15d46 Mon Sep 17 00:00:00 2001 From: Maxime Vono <28217226+mvono@users.noreply.github.com> Date: Sat, 18 Mar 2023 13:52:28 +0100 Subject: [PATCH 03/10] Update threat-model/readme.md Co-authored-by: Charlie Harrison --- threat-model/readme.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index 5cdaea4..c8e66f1 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -18,7 +18,7 @@ Private computation can be instantiated using several technologies: * A trusted execution environment (TEE) isolates computation and its state by using specialized hardware. * Fully homomorphic encryption (FHE) enables computation on the ciphertext of encrypted inputs. -Though the implementation details differ for each technology, ultimately they all rely on finding at least two entities (also called helper parties) that can be trusted not to conspire to reveal private inputs. In the sequel, such entities will be referred to as _aggregators_ and _coordinators_ for MPC-based and TEE-based private computations, respectively. 
+Though the implementation details differ for each technology, ultimately they all rely on finding at least two entities (also called helper parties) that can be trusted not to conspire to reveal private inputs. Such entities will be referred to as _aggregators_ and _coordinators_ for MPC-based and TEE-based private computations, respectively. For our threat model, we assume that an active attacker can control a network of helper parties and has the ability to corrupt any number of clients, the parties who call the proposed APIs, and some subset of aggregators or collectors, when used. From 9f10d2116a09b5f8b65458a704cc2a65f3d6b47a Mon Sep 17 00:00:00 2001 From: Maxime Vono <28217226+mvono@users.noreply.github.com> Date: Sat, 18 Mar 2023 13:52:39 +0100 Subject: [PATCH 04/10] Update threat-model/readme.md Co-authored-by: Charlie Harrison --- threat-model/readme.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index c8e66f1..06899f5 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -20,7 +20,7 @@ Private computation can be instantiated using several technologies: Though the implementation details differ for each technology, ultimately they all rely on finding at least two entities (also called helper parties) that can be trusted not to conspire to reveal private inputs. Such entities will be referred to as _aggregators_ and _coordinators_ for MPC-based and TEE-based private computations, respectively. -For our threat model, we assume that an active attacker can control a network of helper parties and has the ability to corrupt any number of clients, the parties who call the proposed APIs, and some subset of aggregators or collectors, when used. +For our threat model, we assume that an active attacker can control the network and has the ability to corrupt any number of clients, the parties who call the proposed APIs, and some subset of aggregators or collectors, when used. 
In the presence of this adversary, APIs should aim to achieve the following goals: From 97074d62f6a6fc07273865d110464c43358764ff Mon Sep 17 00:00:00 2001 From: Maxime Vono <28217226+mvono@users.noreply.github.com> Date: Sat, 18 Mar 2023 13:56:35 +0100 Subject: [PATCH 05/10] Update readme.md --- threat-model/readme.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index 06899f5..c2ddbc1 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -25,7 +25,7 @@ For our threat model, we assume that an active attacker can control the network In the presence of this adversary, APIs should aim to achieve the following goals: 1. **Privacy**: Clients (and, more specifically, the vendors who distribute the clients) trust that (within the threat models), the API is purpose constrained. That is, all parties receive nothing beyond the intended result (e.g., a differentially private aggregation function computed over the client inputs.) -2. **Correctness:** Parties receiving the intended result have the guarantee that the protocol is executed correctly. Moreover, the amount that a result can be skewed by malicious input is bounded and known. +2. **Correctness:** Parties receiving the intended result trust (within the threat models) that the protocol is executed correctly. Moreover, the amount that a result can be skewed by malicious input is bounded and known. Specific proposed purpose constrained APIs will provide their own analysis about how they achieve these properties. This threat model does not address aspects that are specific to specific private computation designs or configurations. Each private computation instantiation provides different options for defense against attacks. Web platform vendors can decide which configurations produce adequate safeguards for their APIs and users. This is explored further in [section 4. Private Computation Configurations](#4-private-computation-configurations). 
From 0f69ae23a683458a07f229fb3ff9c85459bd66cb Mon Sep 17 00:00:00 2001 From: Maxime Vono <28217226+mvono@users.noreply.github.com> Date: Mon, 20 Mar 2023 10:04:23 +0100 Subject: [PATCH 06/10] Update threat-model/readme.md Co-authored-by: Martin Thomson --- threat-model/readme.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index c2ddbc1..2584ffe 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -10,7 +10,7 @@ This document is currently a **draft**, submitted to the PATCG (and eventually t In this document, we outline the security considerations for proposed purpose-constrained APIs for the web platform (that is, within browsers, mobileOSs, and other user-agents) specified by the Private Advertising Technologies Working Group (PATWG). -Many of these proposals attempt to leverage the concept of _private computation_ as a component of these purpose-constrained APIs. An ideal private computation system would allow for the evaluation of a pre-defined function (i.e., the constrained purpose,) without revealing any new information to any party beyond the output of that predefined function. For instance, private computation could be leveraged to address two main advertising use-cases namely reporting and campaign optimization. Regarding reporting, private computation could be used to perform aggregation over inputs which, individually, must not be revealed. For campaign optimization, private computation could be used to train a machine learning algorithm over those private inputs without revealing them. +Many of these proposals attempt to leverage the concept of _private computation_ as a component of these purpose-constrained APIs. An ideal private computation system would allow for the evaluation of a pre-defined function (i.e., the constrained purpose,) without revealing any new information to any party beyond the output of that predefined function. 
For instance, private computation could be leveraged to address two main advertising use-cases, namely reporting and campaign optimization. Regarding reporting, private computation could be used to perform aggregation over inputs which, individually, must not be revealed. For campaign optimization, private computation could be part of a system that trains a machine learning algorithm, without allowing direct access to private information. Private computation can be instantiated using several technologies: From 468f15a6df90414cb5f924fa1a18b5578abc8b37 Mon Sep 17 00:00:00 2001 From: Maxime Vono <28217226+mvono@users.noreply.github.com> Date: Mon, 20 Mar 2023 10:05:01 +0100 Subject: [PATCH 07/10] Update threat-model/readme.md Co-authored-by: Martin Thomson --- threat-model/readme.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index 2584ffe..12720e6 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -18,7 +18,7 @@ Private computation can be instantiated using several technologies: * A trusted execution environment (TEE) isolates computation and its state by using specialized hardware. * Fully homomorphic encryption (FHE) enables computation on the ciphertext of encrypted inputs. -Though the implementation details differ for each technology, ultimately they all rely on finding at least two entities (also called helper parties) that can be trusted not to conspire to reveal private inputs. Such entities will be referred to as _aggregators_ and _coordinators_ for MPC-based and TEE-based private computations, respectively. +Though the implementation details differ for each technology, ultimately they all rely on finding at least two entities that can be trusted not to conspire to reveal private inputs. Such entities will be referred to as _aggregators_ and _coordinators_ for MPC-based and TEE-based private computations, respectively.
For our threat model, we assume that an active attacker can control the network and has the ability to corrupt any number of clients, the parties who call the proposed APIs, and some subset of aggregators or collectors, when used. From 5f1b3808105fb79ccebfa5477593a7fdbc177ba1 Mon Sep 17 00:00:00 2001 From: Maxime Vono <28217226+mvono@users.noreply.github.com> Date: Mon, 20 Mar 2023 10:05:16 +0100 Subject: [PATCH 08/10] Update threat-model/readme.md Co-authored-by: Martin Thomson --- threat-model/readme.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index 12720e6..254e9a3 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -77,7 +77,7 @@ In this section, we enumerate the potential actors that may participate in a pro #### 1.2.3. Mitigations 1. Modification of client assets should be limited by the API interface to only allow for intended modifications. -2. Use of differential privacy (see [section 3. Aggregation and Anonymization](#3-Aggregation-and-Anonymization)) should be used to prevent [This sentence is not finished] +2. Differential privacy (see [Section 3. Aggregation and Anonymization](#3-Aggregation-and-Anonymization)) should be used to protect the contributions of individual users. ### 1.3. Delegated Parties (Cross Site/App) From fc078c3ffe20fa57bef51c6703271d1bf430693d Mon Sep 17 00:00:00 2001 From: Maxime Vono <28217226+mvono@users.noreply.github.com> Date: Fri, 14 Apr 2023 14:48:23 +0200 Subject: [PATCH 09/10] Update readme.md Hello, Please find attached some proposed updates regarding TEE operators. Although Section 1.8 mentions that "helper parties may run either on physical machines owned by directly by the aggregator or (more commonly) subcontract with a cloud provider", replacing "cloud provider" with "TEE operator" in Section 1.9 could avoid misunderstandings about where the TEE is running (i.e. within a cloud provider or on-premise).
--- threat-model/readme.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index 254e9a3..2a03767 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -177,7 +177,7 @@ However, we do assume that an attacker can always control at least one helper pa ### 1.8 Cloud Providers for Helper Parties -Helper parties may run either on physical machines owned directly by the aggregator or (more commonly) subcontract with a cloud provider. We assume that an attacker can control some subset of cloud providers. +Helper parties may run either on physical machines owned directly by the aggregator or subcontract with a cloud provider. In the latter case, we assume that an attacker can control some subset of cloud providers. #### 1.8.1 Assets @@ -201,7 +201,7 @@ Helper parties may run either on physical machines owned directly by the aggrega ### 1.9 Operators of TEEs -As a piece of hardware, TEEs will have an operator with access to the machine. Most commonly, this will be a cloud provider. Depending on the specific hardware, there may be known vulnerabilities in which an attacker who only controls the operator can violate the obliviousness of client/user data. These attacks are outside this threat model, but are likely to inform specific web platform decisions about which instantiations of private computation to support. +As a piece of hardware, TEEs will have an operator with access to the machine. Depending on the specific hardware, there may be known vulnerabilities in which an attacker who only controls the operator can violate the obliviousness of client/user data. These attacks are outside this threat model, but are likely to inform specific web platform decisions about which instantiations of private computation to support. #### 1.9.1 Assets TODO @@ -225,12 +225,12 @@ TEEs can provide "attestation" which verifies that the TEE is running in the exp #### 1.9.2 Capabilities -1. 
If an attacker controls both the cloud provider and the TEE manufacturer, decrypt all data within the TEE. +1. If an attacker controls both the TEE operator and the TEE manufacturer, they can decrypt all data within the TEE. #### 1.9.2 Mitigations -1. Pick a configuration of TEE manufacturer and cloud operator where it can be assumed that an attacker cannot control both. +1. Pick a configuration of TEE manufacturer and TEE operator where it can be assumed that an attacker cannot control both. ### 1.11. Attacker on the network From 5295eabf2ba201d5fe0af514624d776b057b096a Mon Sep 17 00:00:00 2001 From: Michael Kleber Date: Mon, 30 Oct 2023 14:20:25 -0400 Subject: [PATCH 10/10] Update threat-model/readme.md --- threat-model/readme.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/threat-model/readme.md b/threat-model/readme.md index 2a03767..4a0b312 100644 --- a/threat-model/readme.md +++ b/threat-model/readme.md @@ -201,7 +201,7 @@ Helper parties may run either on physical machines owned by directly by the aggr ### 1.9 Operators of TEEs -As a piece of hardware, TEEs will have an operator with access to the machine. Depending on the specific hardware, there may be known vulnerabilities in which an attacker who only controls the operator can violate the obliviousness of client/user data. These attacks are outside this threat model, but are likely to inform specific web platform decisions about which instantiations of private computation to support. +As a piece of hardware, TEEs will have an operator with access to the machine. (For example, this might be a cloud provider who offers a confidential computing product.) Depending on the specific hardware, there may be known vulnerabilities in which an attacker who only controls the operator can violate the obliviousness of client/user data. These attacks are outside this threat model, but are likely to inform specific web platform decisions about which instantiations of private computation to support.
#### 1.9.1 Assets TODO
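Taken together, the patches above describe a pipeline in which clients secret-share their inputs among non-colluding aggregators and only a differentially private aggregate is ever released. The flow can be sketched in a few lines; note this is an illustrative sketch only, under assumptions not taken from any PATWG proposal (two aggregators, additive secret sharing over a prime field, a Laplace mechanism on the released total), and the function names `share`, `aggregate`, and `dp_release` are hypothetical:

```python
import math
import random

MODULUS = 2**61 - 1  # a Mersenne prime; the field choice is an assumption

def share(value, n_parties=2):
    """Split `value` into additive shares; any n_parties-1 shares are
    uniformly random and reveal nothing about `value` on their own."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def aggregate(rows_per_party):
    """Each aggregator sums the shares it holds locally; combining the
    per-party partial sums reveals only the total, never any one input."""
    partials = [sum(row) % MODULUS for row in rows_per_party]
    return sum(partials) % MODULUS

def dp_release(total, sensitivity, epsilon):
    """Add Laplace noise with scale sensitivity/epsilon before release,
    assuming each client's contribution is capped at `sensitivity`."""
    u = random.random() - 0.5  # inverse-CDF Laplace sampling
    scale = sensitivity / epsilon
    return total - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Three clients; neither aggregator ever sees a raw input.
inputs = [3, 5, 7]
client_shares = [share(v) for v in inputs]       # one row per client
by_party = list(map(list, zip(*client_shares)))  # one row per aggregator
total = aggregate(by_party)
assert total == sum(inputs)
noisy_result = dp_release(total, sensitivity=10, epsilon=1.0)
```

The exact reconstruction of `total` is what makes additive sharing suit the reporting use-case, while the `sensitivity / epsilon` noise scale is what bounds how much any single (capped) client input can shift the released value.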