|
|
---
|
|
|
coding: utf-8
|
|
|
|
|
|
title: "Motivation and Requirements for Decentralized Usable Privacy"
|
|
|
abbrev: MEDUP Motivation and Requirements
|
|
|
docname: draft-symeonidis-medup-requirements-00
|
|
|
category: std
|
|
|
|
|
|
stand_alone: yes
|
|
|
pi: [toc, sortrefs, symrefs, comments]
|
|
|
|
|
|
author:
|
|
|
{::include ../shared/author_tags/iraklis_symeonidis.mkd}
|
|
|
{::include ../shared/author_tags/bernie_hoeneisen.mkd}
|
|
|
|
|
|
normative:
|
|
|
RFC4949:
|
|
|
RFC7435:
|
|
|
{::include ../shared/references/unger-sok.mkd}
|
|
|
{::include ../shared/references/pfitzmann-terminology-privacy.mkd}
|
|
|
{::include ../shared/references/tor-timing-attacks.mkd}
|
|
|
{::include ../shared/references/diaz-measuring-anonymity.mkd}
|
|
|
|
|
|
|
|
|
informative:
|
|
|
RFC4880:
|
|
|
# RFC6973:
|
|
|
# RFC7258:
|
|
|
# RFC7942:
|
|
|
RFC8280:
|
|
|
I-D.birk-pep:
|
|
|
# I-D.marques-pep-email:
|
|
|
I-D.birk-pep-trustwords:
|
|
|
# I-D.marques-pep-rating:
|
|
|
# I-D.marques-pep-handshake:
|
|
|
|
|
|
# {::include ../shared/references/ed-keysync.mkd}
|
|
|
# {::include ../shared/references/isoc-btn.mkd}
|
|
|
# {::include ../shared/references/implementation-status.mkd}
|
|
|
|
|
|
|
|
|
--- abstract
|
|
|
|
|
|
{{RFC8280}} has identified and documented important principles, such
as Data Minimization, End-to-End, and Interoperability, in order to
enable access to Human Rights. While (partial) implementations of
these concepts are already available, today's applications widely lack
Privacy support that ordinary users can easily handle.

This document covers the analysis of threats and requirements.
|
|
|
|
|
|
|
|
|
--- middle
|
|
|
|
|
|
# Introduction
|
|
|
|
|
|
{{RFC8280}} has identified and documented important principles, such
as Data Minimization, End-to-End, and Interoperability, in order to
enable access to Human Rights. While (partial) implementations of
these concepts are already available, today's applications widely lack
Privacy support that ordinary users can easily handle.
|
|
|
|
|
|
In MEDUP, these issues are addressed based on Opportunistic Security
principles {{RFC7435}}.
|
|
|
|
|
|
This document covers the analysis of threats and requirements.
|
|
|
|
|
|
|
|
|
{::include ../shared/text-blocks/key-words-rfc2119.mkd}
|
|
|
|
|
|
|
|
|
{::include ../shared/text-blocks/terms-intro.mkd}
|
|
|
|
|
|
<!-- {::include ../shared/text-blocks/handshake.mkd} -->
|
|
|
{::include ../shared/text-blocks/trustwords.mkd}
|
|
|
{::include ../shared/text-blocks/tofu.mkd}
|
|
|
{::include ../shared/text-blocks/mitm.mkd}
|
|
|
|
|
|
|
|
|
# Motivation and Background
|
|
|
|
|
|
|
|
|
## Objectives
|
|
|
|
|
|
* An open standard for secure messaging requirements
|
|
|
|
|
|
* Unified evaluation framework: unified goals and threat models
|
|
|
|
|
|
* Common pitfalls
|
|
|
|
|
|
* Future directions on requirements and technologies
|
|
|
|
|
|
* Misleading products in the wild (cf. the EFF Secure Messaging Scorecard)
|
|
|
|
|
|
|
|
|
## Known Implementations
|
|
|
|
|
|
### Pretty Easy Privacy (pEp) {#pEp}
|
|
|
|
|
|
To achieve privacy of exchanged messages in an opportunistic way
{{RFC7435}}, pEp (pretty Easy Privacy) {{I-D.birk-pep}} proposes the
following (simplified) model:
|
|
|
|
|
|
{::include ../shared/ascii-arts/basic-msg-flow.mkd}
|
|
|
|
|
|
<vspace blankLines="10" />
|
|
|
|
|
|
|
|
|
<!--
|
|
|
pEp uses the paradigm of online and offline transports.

An offline transport conveys messages by store and forward. The
connection status of the receiver is not important while sending, and
it may not be available at all. Examples are Internet Mail and SMS.

An online transport conveys messages synchronously to receivers that
are online. If receivers are offline, no message can be transported.
The connection status of a receiver is available to the sender.
Examples are Jabber and IRC.
|
|
|
-->
|
|
|
|
|
|
pEp is supposed to solve three problems <!-- for both types of transports -->:
|
|
|
|
|
|
* Key management
|
|
|
* Trust management
|
|
|
* Identity management
|
|
|
|
|
|
pEp is supposed to provide Privacy by Default, at least for message
content. In addition, pEp provides metadata protection. pEp is meant
to be used in already existing messaging solutions.

Furthermore, pEp is supposed to provide technical data protection by
implementing mix network capabilities.
|
|
|
|
|
|
Additionally, there are use cases for enterprise environments, where
|
|
|
e.g. some instance at the enterprise may need to look into the
|
|
|
messages. Reasons for this include compliance requirements or virus /
malware checking.
|
|
|
|
|
|
\[\[ TODO: Decide whether enterprise requirements will be covered
|
|
|
herein \]\]
|
|
|
|
|
|
### Autocrypt
|
|
|
|
|
|
The Autocrypt approach is another known project following the
above-mentioned principles, though its goals differ slightly from
those of pEp (cf. {{pEp}}), for example regarding support of
legacy PGP {{RFC4880}} implementations.
|
|
|
|
|
|
More information on Autocrypt can be found at:
https://autocrypt.org/background.html
|
|
|
|
|
|
|
|
|
\[\[ TODO: Input from autocrypt group \]\]
|
|
|
|
|
|
|
|
|
# Basic Functional Requirements
|
|
|
|
|
|
This section outlines the functional requirements. We follow the
|
|
|
requirements extracted from the literature on private email and
instant messaging {{Unger}}.
|
|
|
|
|
|
* Message: send and receive message(s)
|
|
|
* Multi-device support: synchronisation across multiple devices
|
|
|
* Group messaging: communication among more than two users
|
|
|
|
|
|
\[\[ TODO: Add more text on Group Messaging requirements. \]\]
|
|
|
|
|
|
|
|
|
# Threat Analyses
|
|
|
|
|
|
This section describes a set of possible threats. Note that not all threats
|
|
|
can be addressed, due to conflicting requirements.
|
|
|
|
|
|
|
|
|
## Establish Evaluation Criteria for:
|
|
|
|
|
|
* Security and privacy requirements
|
|
|
|
|
|
* Usability (little work on usability and trust establishment)
|
|
|
|
|
|
* Adoption implications
|
|
|
|
|
|
|
|
|
## Focus Areas (Design Challenges):
|
|
|
|
|
|
* Trust establishment: some human interaction
|
|
|
|
|
|
* Conversation security: no human interaction
|
|
|
|
|
|
* Transport privacy: no human interaction
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# System Model
|
|
|
|
|
|
## Entities
|
|
|
|
|
|
|
|
|
* Users: Sender and receiver(s)
|
|
|
|
|
|
These are the communicating parties, i.e., the sender and the receiver(s) of messages.
|
|
|
|
|
|
* Messaging operators and network nodes
|
|
|
|
|
|
These are the servers and network nodes that are responsible for message delivery and synchronization.
|
|
|
|
|
|
* Third parties
|
|
|
|
|
|
These are any other entities interacting with the system.
|
|
|
|
|
|
|
|
|
# Problem Areas
|
|
|
|
|
|
## Security Threats and Requirements
|
|
|
|
|
|
### Spoofing and Entity Authentication
|
|
|
|
|
|
An adversary can spoof the profile of a user and impersonate that
user. It may attempt to send or receive a message on behalf of a
legitimate user. An adversary can be a user of the system gaining
access as an imposter to send or receive messages. For example, an
adversary can impersonate a valid sender of a message and send it on
that sender's behalf. The capabilities of an adversary are usually
local, controlling one entity or a set of entities, in the sense that
each spoofed identity will be used to communicate with different end
users. To mitigate spoofing threats, it is essential to have entity
authentication mechanisms safeguarding that a user is the legitimate
owner of a messaging service account. For example, users can prove
that they know something such as a password, possess something such as
a key, or have specific features such as biometrics.
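
As a simple illustration of the possession factor, the following
sketch shows challenge-response entity authentication based on a
digital signature. It is a minimal sketch, assuming the
pyca/cryptography Python package; the peer name and challenge format
are illustrative only, not part of any specification.

~~~
# Challenge-response entity authentication sketch (possession factor).
# Illustrative only; assumes the pyca/cryptography package.
import secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

alice_key = Ed25519PrivateKey.generate()   # Alice's long-term key pair
alice_pub = alice_key.public_key()         # known to the verifier

challenge = secrets.token_bytes(32)        # fresh nonce from the verifier
response = alice_key.sign(challenge)       # Alice proves key possession

try:
    alice_pub.verify(response, challenge)  # raises InvalidSignature on forgery
    print("peer authenticated")
except InvalidSignature:
    print("authentication failed")
~~~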
|
|
|
|
|
|
### Information Disclosure and Confidentiality
|
|
|
|
|
|
An adversary aims to retrieve and disclose information about the
content of a message. It can attempt to perform a man-in-the-middle
(MitM) attack, eavesdropping on and forwarding messages as an
intermediary between the communicating users. For example, an
adversary can try to position itself between two communicating
parties, such as at the messaging server, and remain undetected while
collecting the information transmitted to the intended users. The
capabilities of an adversary range from local, controlling one point
of the communication channel such as an entity or a communication link
of the network, to global, controlling several entities and
communication links of the channel and thus gaining the capability to
correlate traffic, such as in timing attacks, even for end-to-end
communication systems {{Tor}}. Therefore, the confidentiality of
messages exchanged in the system should be guaranteed with the use of
encryption schemes such as symmetric, asymmetric, or homomorphic
encryption.
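
As a rough illustration of such an encryption scheme (and not the wire
format of any implementation mentioned above), the following sketch
combines an X25519 key agreement with AES-GCM, assuming the
pyca/cryptography Python package:

~~~
# Hybrid encryption sketch: X25519 key agreement + AES-GCM encryption.
# Illustrative only; real protocols add identity binding, fresh session
# keys, and replay protection.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

sender = X25519PrivateKey.generate()
receiver = X25519PrivateKey.generate()

def session_key(own_private, peer_public):
    shared = own_private.exchange(peer_public)       # same on both sides
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"example").derive(shared)

nonce = os.urandom(12)
ciphertext = AESGCM(session_key(sender, receiver.public_key())).encrypt(
    nonce, b"hello", None)
plaintext = AESGCM(session_key(receiver, sender.public_key())).decrypt(
    nonce, ciphertext, None)
assert plaintext == b"hello"             # a MitM sees only ciphertext
~~~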
|
|
|
|
|
|
|
|
|
### Tampering With Data and Data Authentication
|
|
|
|
|
|
An adversary can tamper with the messages, aiming to modify the
information stored or exchanged between the communicating entities in
the system. For instance, an adversary may attempt to alter an email
or an instant message by changing its content. The adversary can be
anyone but the communicating users, such as the messaging operators,
the network nodes, and third parties. The capabilities of an adversary
can be local, controlling an entity that can alter messages, usually
by performing a MitM attack on an encrypted channel. Therefore, no
honest party should accept a message that was modified in transit.
Data authentication of messages needs to be guaranteed, for instance
with the use of MAC algorithms and digital signatures.
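
A minimal sketch of such a data authentication check with an HMAC,
using only the Python standard library; key distribution is out of
scope of the example:

~~~
# Data authentication sketch using an HMAC (standard library only).
# Any in-transit modification of the message invalidates the tag.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)          # shared between the honest parties
message = b"meet at noon"
tag = hmac.new(key, message, hashlib.sha256).digest()

def accept(msg: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)

assert accept(b"meet at noon", tag)    # unmodified message is accepted
assert not accept(b"meet at one", tag) # tampered message is rejected
~~~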
|
|
|
|
|
|
### Repudiation and Accountability (Non-Repudiation)
|
|
|
|
|
|
An adversary can repudiate an email sent or received by providing
falsified information about the status of the message to users of the
system. For instance, an adversary may attempt to state inaccurate
information about an action performed, such as about sending or
receiving an email. An adversary can be anyone who is involved in the
communication, such as the users of the system, the messaging
operators, and the network nodes. To mitigate repudiation threats,
accountability and non-repudiation of the actions performed must be
guaranteed. Non-repudiation of an action can be of origin, submission,
delivery, and receipt, providing proof of the actions performed to the
intended recipient. It can be achieved with the use of cryptographic
schemes such as digital signatures, and audit trails such as
timestamps.
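
The sketch below illustrates such a proof of origin: a signed,
timestamped receipt that binds sender, content, and action. It assumes
the pyca/cryptography Python package; the receipt format is purely
illustrative.

~~~
# Non-repudiation sketch: a signed, timestamped proof of submission.
# Illustrative receipt format; assumes the pyca/cryptography package.
import hashlib
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key = Ed25519PrivateKey.generate()

def proof_of_submission(message: bytes) -> dict:
    receipt = {
        "digest": hashlib.sha256(message).hexdigest(),  # binds the content
        "timestamp": int(time.time()),                  # audit-trail element
        "action": "submission",
    }
    blob = json.dumps(receipt, sort_keys=True).encode()
    return {"receipt": receipt,
            "signature": sender_key.sign(blob).hex()}   # binds the sender

evidence = proof_of_submission(b"the agreed contract")
# Anyone holding the sender's public key can verify this evidence later,
# so the sender cannot plausibly deny having submitted the message.
~~~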
|
|
|
|
|
|
|
|
|
## Privacy Threats and Requirements
|
|
|
|
|
|
### Identifiability -- Anonymity
|
|
|
|
|
|
An adversary can identify a specific user associated with an Item of
Interest (IOI), i.e., an ID of a subject, a message sent, or an action
performed. Identifiability is the state under which a specific user
can be identified from a set of users, defined as the identifiability
set. For instance, an adversary may identify the sender of a message
by examining the headers of an email exchanged within the system. An
adversary can be anyone but the users who are communicating, such as
the messaging operators, the network nodes, or third parties. To
mitigate identifiability threats, the anonymity of users must be
guaranteed. Anonymity is defined as follows: "Anonymity of a subject
from an attacker's perspective means that the attacker cannot
sufficiently identify the subject within a set of subjects, the
anonymity set" {{Pfitzmann}}. Essentially, to enable anonymity, there
always needs to be a set of possible subjects such that, for an
adversary, the communicating user is equally likely to be any user in
the set {{Diaz}}. Thus, an adversary cannot deduce who the originator
of a message is. Anonymity can be achieved with the use of pseudonyms
and cryptographic schemes such as anonymous remailers (i.e., mixnets),
anonymous communication channels (e.g., Tor), and secret sharing.
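
The notion of "equally likely" can be quantified. The sketch below
computes the entropy-based degree of anonymity from {{Diaz}}, given
the probabilities an adversary assigns to each member of the anonymity
set:

~~~
# Degree of anonymity (Diaz et al.): d = H(X) / H_max, where H(X) is
# the entropy of the adversary's probability distribution over the
# anonymity set and H_max = log2(N) corresponds to a uniform one.
import math

def anonymity_degree(probabilities):
    entropy = -sum(p * math.log2(p) for p in probabilities if p > 0)
    max_entropy = math.log2(len(probabilities))
    return entropy / max_entropy if max_entropy else 0.0

print(anonymity_degree([0.25, 0.25, 0.25, 0.25]))  # 1.0: all equally likely
print(anonymity_degree([0.97, 0.01, 0.01, 0.01]))  # near 0: almost identified
~~~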
|
|
|
|
|
|
### Linkability -- Unlinkability
|
|
|
|
|
|
An adversary can sufficiently distinguish within the system whether
two or more Items of Interest (IOIs) such as subjects, objects,
messages, and actions are linked to the same user. For instance, an
adversary can relate pseudonyms from messages exchanged and deduce
whether it is the same user who sent the messages. The adversary can
be anyone but the users who are communicating, such as the messaging
operators, the network nodes, or third parties. Therefore,
unlinkability of IOIs should be guaranteed with the use of pseudonyms
and cryptographic schemes such as anonymous credentials.
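
Anonymous credentials are beyond a short example, but the pseudonym
part can be sketched: deriving an independent pseudonym per context
from one master secret, so that two pseudonyms cannot be linked to
each other without that secret. Standard library only; the context
labels are illustrative.

~~~
# Unlinkable per-context pseudonyms derived from one master secret.
# HMAC outputs are pseudorandom, so pseudonyms from different contexts
# cannot be linked without the key.
import hashlib
import hmac
import secrets

master_secret = secrets.token_bytes(32)   # known only to the user

def pseudonym(context: bytes) -> str:
    return hmac.new(master_secret, context, hashlib.sha256).hexdigest()[:16]

print(pseudonym(b"mailing-list-A"))       # distinct, unlinkable identifiers
print(pseudonym(b"messaging-peer-B"))
~~~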
|
|
|
|
|
|
|
|
|
### Detectability and Observability -- Undetectability
|
|
|
|
|
|
An adversary can sufficiently distinguish an IOI, such as the messages
exchanged within the system, from random noise. For instance, an
adversary can detect a specific IOI when a user is sending a message
from a set of communicating users. An adversary can be anyone but the
users who are communicating, such as the messaging operators, the
network nodes, or third parties. In contrast to anonymity and
unlinkability, which protect the relationship of an IOI to a user,
undetectability protects the existence of the IOI itself; it is
defined as follows: "Undetectability of an item of interest (IOI) from
an attacker's perspective means that the attacker cannot sufficiently
distinguish whether it exists or not" {{Pfitzmann}}. Undetectability
of IOIs can be guaranteed with the use of cryptographic schemes such
as mix-nets and obfuscation mechanisms such as dummy traffic.
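
The sketch below illustrates the dummy-traffic idea: messages are
padded to a fixed size and sent at a constant rate, with random
dummies filling idle slots. It assumes payloads are encrypted before
transmission (so padding and dummies are indistinguishable on the
wire); send() is a hypothetical transport hook.

~~~
# Undetectability sketch: constant-rate sending with dummy cover traffic.
import os
import queue
import time

PAYLOAD = 256                             # fixed on-the-wire message size
outbox = queue.Queue()                    # real messages awaiting transmission

def send(blob: bytes) -> None:            # hypothetical (encrypting) transport
    pass

def pump(rounds: int, interval: float = 1.0) -> None:
    for _ in range(rounds):
        try:
            msg = outbox.get_nowait()     # real message, padded to PAYLOAD
            blob = msg.ljust(PAYLOAD, b"\x00")
        except queue.Empty:
            blob = os.urandom(PAYLOAD)    # dummy message: random noise
        send(blob)                        # same size and timing either way
        time.sleep(interval)
~~~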
|
|
|
|
|
|
|
|
|
### Information Disclosure -- Confidentiality
|
|
|
|
|
|
An adversary can disclose information about users exchanged within the
system. It can perform a MitM attack aiming to learn the contents of a
message and its metadata, such as with whom someone is communicating
and with what frequency. The adversary can be anyone but the users who
are communicating, such as the messaging servers and the network
nodes. The capabilities of an adversary range from local, controlling
one entity or channel of the network, to global, controlling several
entities and communication links. Confidentiality of messages and
their metadata needs to be guaranteed with the use of cryptographic
operations such as secret sharing and symmetric, asymmetric, or
homomorphic encryption.
|
|
|
|
|
|
|
|
|
### Non-Repudiation -- Deniability
|
|
|
|
|
|
In contrast to security, non-repudiation can be a threat to a user's
privacy in messaging systems. An adversary may attempt to collect
evidence exchanged in the system aiming to prove to others that a
specific user is the originator of a specific message. That can be
problematic for users such as whistle-blowers in countries where
censorship is a daily routine and where human lives can be at stake.
Therefore, plausible deniability, unlike non-repudiation, must be
guaranteed: the system guarantees that an adversary can neither
confirm nor contradict that a specific user has sent a message.
Deniability can be achieved with the use of cryptographic schemes such
as off-the-record messaging.
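
The core trick behind off-the-record messaging can be sketched
briefly: authenticate messages with a MAC under a shared key rather
than a signature. The peer is convinced of the origin, but since the
peer holds the same key and could have produced the very same tag, the
transcript proves nothing to a third party. Standard library only:

~~~
# Deniable authentication sketch: shared-key MAC instead of a signature.
import hashlib
import hmac
import secrets

shared_key = secrets.token_bytes(32)  # e.g., established via key agreement

def authenticate(message: bytes) -> bytes:
    return hmac.new(shared_key, message, hashlib.sha256).digest()

tag = authenticate(b"sensitive report")     # convinces the receiving peer...
forgery = authenticate(b"sensitive report") # ...who could have forged the
assert hmac.compare_digest(tag, forgery)    # identical tag: deniability
~~~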
|
|
|
|
|
|
|
|
|
<!-- =================================================================== -->
|
|
|
|
|
|
# Specific Security and Privacy Requirements
|
|
|
|
|
|
## Messages Exchange
|
|
|
|
|
|
### Send Message
|
|
|
|
|
|
* Send encrypted and signed message to another peer
|
|
|
|
|
|
* Send unencrypted and unsigned message to another peer
|
|
|
|
|
|
Note: Subcases of sending messages are outlined in
|
|
|
{{subcases-for-sending-messages}}.
|
|
|
|
|
|
### Receive Message
|
|
|
|
|
|
* Receive encrypted and signed message from another peer
|
|
|
|
|
|
* Receive encrypted, but not signed message from another peer
|
|
|
|
|
|
* Receive signed, but not encrypted message from another peer
|
|
|
|
|
|
* Receive unencrypted and unsigned message from another peer
|
|
|
|
|
|
Note: Subcases of receiving messages are outlined in
|
|
|
{{subcases-for-receiving-messages}}.
|
|
|
|
|
|
|
|
|
## Trust Management
|
|
|
|
|
|
* Trust rating of a peer is updated (locally) when:
|
|
|
|
|
|
* Public Key is received for the first time
|
|
|
|
|
|
* Trustwords have been compared successfully and confirmed by user
|
|
|
(see above)
|
|
|
|
|
|
* Trust of a peer is revoked (cf. {{key-management}}, Key Reset)
|
|
|
|
|
|
* Trust of a public key is synchronized among different devices of the
|
|
|
same user
|
|
|
|
|
|
Note: Synchronization management (such as establishing or revoking
trust) among a user's own devices is described in
{{synchronization-management}}.
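
As a rough illustration of the Trustwords comparison
(cf. {{I-D.birk-pep-trustwords}}), the sketch below maps the combined
key fingerprints of both peers to a short word sequence that the users
compare over an independent channel. The word list is a toy example,
not the actual pEp word list:

~~~
# Trustwords-style comparison sketch; the 16-entry word list is
# illustrative only.
import hashlib

WORDS = ["apple", "brick", "cloud", "delta", "ember", "flint", "grove",
         "horse", "ivory", "joker", "koala", "lemon", "mango", "noble",
         "oasis", "piano"]

def trustwords(fpr_a: bytes, fpr_b: bytes, count: int = 5):
    # Order the fingerprints so both peers compute the same words.
    digest = hashlib.sha256(min(fpr_a, fpr_b) + max(fpr_a, fpr_b)).digest()
    return [WORDS[byte % len(WORDS)] for byte in digest[:count]]

# Both users run this locally and read the words aloud; a match confirms
# the keys and raises the trust rating, a mismatch hints at a MitM.
print(trustwords(b"\x01" * 20, b"\x02" * 20))
~~~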
|
|
|
|
|
|
|
|
|
## Key Management
|
|
|
|
|
|
* New Key pair is generated automatically (if none found) at startup
|
|
|
|
|
|
* Public Key is sent to peer by attaching it to messages
|
|
|
|
|
|
* Public Key received by a peer is stored locally
|
|
|
|
|
|
* Key pair is declared invalid and other peers are informed (Key Reset)
|
|
|
|
|
|
* Public Key is marked invalid after receiving a key reset message
|
|
|
|
|
|
* Public Keys of peers are synchronized among different devices of the
same user
|
|
|
|
|
|
* Private Key is synchronized among different devices of the same user
|
|
|
|
|
|
Note: Synchronization management (such as establishing or revoking
trust) among a user's own devices is described in
{{synchronization-management}}.
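
A minimal sketch of the first requirement above, generating a key pair
at startup only if none is found, assuming the pyca/cryptography
Python package; the storage location is illustrative, and a real
implementation would protect the private key at rest:

~~~
# "Generate a key pair automatically if none is found" sketch.
from pathlib import Path
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

KEY_FILE = Path("identity_key.pem")        # hypothetical local key store

def load_or_generate_key():
    if KEY_FILE.exists():
        return serialization.load_pem_private_key(KEY_FILE.read_bytes(),
                                                  password=None)
    key = Ed25519PrivateKey.generate()
    KEY_FILE.write_bytes(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption()))  # demo only
    return key

key = load_or_generate_key()               # stable identity across startups
~~~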
|
|
|
|
|
|
|
|
|
## Synchronization Management
|
|
|
|
|
|
A device group is a group of devices of the same user that share the
same key pairs in order to synchronize data among them. Within a
device group, the devices of the same user mutually authenticate each
other.
|
|
|
|
|
|
* Form a device group of two (yet ungrouped) devices of the same user
|
|
|
|
|
|
<!--
|
|
|
Note: Preconditions for forming a device group are outlined in
|
|
|
{{preconditions-for-forming-a-device-group}}.
|
|
|
-->
|
|
|
|
|
|
* Join another device of the same user to existing device group
|
|
|
|
|
|
* Leave device group
|
|
|
|
|
|
* Remove other device from device group
|
|
|
|
|
|
|
|
|
## Identity Management
|
|
|
|
|
|
* All involved parties share the same identity system
|
|
|
|
|
|
|
|
|
## User Interface
|
|
|
|
|
|
* Need for user interaction is kept to the minimum necessary
|
|
|
|
|
|
* The privacy status of a peer is presented to the user by a color rating
|
|
|
|
|
|
* The privacy status of a message is presented to the user by a color rating
|
|
|
|
|
|
* The color rating is defined by a traffic-light semantics
|
|
|
|
|
|
|
|
|
# Subcases
|
|
|
|
|
|
<!-- Do we need this section at all? -->
|
|
|
|
|
|
## Interaction States
|
|
|
|
|
|
The basic model consists of six different interaction states:
|
|
|
|
|
|
1. Neither peer has the public key of the other; no trust is possible

2. Only one peer has the public key of the other peer, but no trust

3. Only one peer has the public key of the other peer and trusts
that public key

4. Both peers have the public key of each other, but no trust

5. Both peers have the public key of each other, but only one peer
trusts the other peer's public key

6. Both peers have the public key of each other, and both peers
trust each other's public key
|
|
|
|
|
|
|
|
|
The following table shows the different interaction states possible:
|
|
|
|
|
|
| state | Peer's Public Key available | My Public Key available to Peer | Peer Trusted | Peer trusts me |
|
|
|
| ------|:---------------------------:|:-------------------------------:|:------------:|:--------------:|
|
|
|
| 1. | no | no | N/A | N/A |
|
|
|
| 2a. | no | yes | N/A | no |
|
|
|
| 2b. | yes | no | no | N/A |
|
|
|
| 3a. | no | yes | N/A | yes |
|
|
|
| 3b. | yes | no | yes | N/A |
|
|
|
| 4. | yes | yes | no | no |
|
|
|
| 5a. | yes | yes | no | yes |
|
|
|
| 5b. | yes | yes | yes | no |
|
|
|
| 6. | yes | yes | yes | yes |
|
|
|
|
|
|
|
|
|
In the simplified model, only interaction states 1, 2, 4 and 6 are
|
|
|
depicted. States 3 and 5 may result from e.g. key mistrust or abnormal
|
|
|
user behavior.
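
The mapping from the four columns of the table to a state label can be
expressed directly; the following sketch mirrors the table above:

~~~
# Interaction state lookup mirroring the table above.
def interaction_state(peer_key_available: bool, my_key_at_peer: bool,
                      peer_trusted: bool, peer_trusts_me: bool) -> str:
    if not peer_key_available and not my_key_at_peer:
        return "1"
    if not peer_key_available:            # only my key is known to the peer
        return "3a" if peer_trusts_me else "2a"
    if not my_key_at_peer:                # only the peer's key is known to me
        return "3b" if peer_trusted else "2b"
    if peer_trusted and peer_trusts_me:
        return "6"
    if peer_trusts_me:
        return "5a"
    if peer_trusted:
        return "5b"
    return "4"

assert interaction_state(True, True, False, True) == "5a"
assert interaction_state(True, True, True, True) == "6"
~~~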
|
|
|
|
|
|
Note: As one peer may have several keys or if group conversations are
|
|
|
involved, things will get more complex. For the time being, we focus on
|
|
|
bilateral interactions, whereas group interactions are split up into
|
|
|
several bilateral interactions.
|
|
|
|
|
|
|
|
|
## Subcases for Sending Messages
|
|
|
|
|
|
* If peer's Public Key not available (Interaction States 1, 2a, and 3a)
|
|
|
|
|
|
* Send message Unencrypted (and not Signed)
|
|
|
|
|
|
* If peer's Public Key available (Interaction States 2b, 3b, 4, 5a, 5b, 6)
|
|
|
|
|
|
* Send message Encrypted and Signed
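
This decision can be expressed as a short sketch; encrypt_and_sign()
and transmit() are hypothetical placeholders, not a concrete wire
format. The plaintext fallback reflects Opportunistic Security
{{RFC7435}}.

~~~
# Sending decision sketch for the subcases above.
from typing import Optional

def encrypt_and_sign(message: bytes, peer_key: bytes) -> bytes:
    return b"<encrypted+signed>" + message   # placeholder transformation

def transmit(blob: bytes) -> None:
    print("sending:", blob)

def send_message(message: bytes, peer_key: Optional[bytes]) -> None:
    if peer_key is not None:    # interaction states 2b, 3b, 4, 5a, 5b, 6
        transmit(encrypt_and_sign(message, peer_key))
    else:                       # interaction states 1, 2a, 3a
        transmit(message)       # unencrypted and unsigned

send_message(b"hi", peer_key=b"...")  # encrypted and signed
send_message(b"hi", peer_key=None)    # plaintext fallback
~~~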
|
|
|
|
|
|
|
|
|
## Subcases for Receiving Messages
|
|
|
|
|
|
* If peer's Public Key not available (Interaction States 1, 2a, and 3a)
|
|
|
|
|
|
* If message is signed
|
|
|
|
|
|
* Ignore signature
|
|
|
|
|
|
* If message is encrypted
|
|
|
|
|
|
* Decrypt with caution
|
|
|
|
|
|
* If message not encrypted
|
|
|
|
|
|
* No further processing regarding encryption
|
|
|
|
|
|
* If peer's Public Key available or can be retrieved from received message
|
|
|
(Interaction States 2b, 3b, 4, 5a, 5b, 6)
|
|
|
|
|
|
* If message is signed
|
|
|
|
|
|
* Verify signature
|
|
|
|
|
|
* If message is encrypted
|
|
|
|
|
|
* Decrypt
|
|
|
|
|
|
* If message not encrypted
|
|
|
|
|
|
* No further processing regarding encryption
|
|
|
|
|
|
* If message not signed
|
|
|
|
|
|
* If message is encrypted
|
|
|
|
|
|
* Exception
|
|
|
|
|
|
* If message not encrypted
|
|
|
|
|
|
* No further processing regarding encryption
|
|
|
|
|
|
|
|
|
|
|
|
<!-- =================================================================== -->
|
|
|
|
|
|
|
|
|
# Security Considerations
|
|
|
|
|
|
Relevant security considerations are outlined in {{security-threats-and-requirements}}.
|
|
|
|
|
|
|
|
|
# Privacy Considerations
|
|
|
|
|
|
Relevant privacy considerations are outlined in {{privacy-threats-and-requirements}}.
|
|
|
|
|
|
|
|
|
|
|
|
# IANA Considerations
|
|
|
|
|
|
This document requests no action from IANA.
|
|
|
|
|
|
\[\[ RFC Editor: This section may be removed before publication. \]\]
|
|
|
|
|
|
|
|
|
# Acknowledgements
|
|
|
|
|
|
|
|
|
The authors would like to thank the following people who have provided
|
|
|
feedback or significant contributions to the development of this
|
|
|
document: Volker Birk
|
|
|
\[\[ TODO: Forename Surname, Forename2 Surname2, ...\]\]
|
|
|
|
|
|
|
|
|
|
|
|
--- back
|
|
|
|
|
|
# Document Changelog
|
|
|
|
|
|
\[\[ RFC Editor: This section is to be removed before publication \]\]
|
|
|
|
|
|
* draft-symeonidis-medup-requirements-00:
|
|
|
* Initial version
|
|
|
|
|
|
# Open Issues
|
|
|
|
|
|
\[\[ RFC Editor: This section should be empty and is to be removed
|
|
|
before publication \]\]
|
|
|
|
|
|
* Add references to used materials (in particular threat analyses part)
|
|
|
|
|
|
* Get content from Autocrypt ({{autocrypt}})
|
|
|
|
|
|
* Add more text on Group Messaging requirements
|
|
|
|
|
|
* Decide whether or not "enterprise requirements" will be covered in
this document
|