p≡p I-Ds (IETF Internet-Drafts)
---
coding: utf-8

title: "Requirements for Private Messaging"
abbrev: "Private Messaging Requirements"
docname: draft-symeonidis-medup-requirements-01
category: std

stand_alone: yes
pi: [toc, sortrefs, symrefs, comments]

author:
{::include ../shared/author_tags/iraklis_symeonidis.mkd}
{::include ../shared/author_tags/bernie_hoeneisen.mkd}

normative:
  RFC4949:
  RFC7435:

informative:
  RFC4880:
  RFC6973:
#  RFC7258:
#  RFC7942:
  RFC8280:
  I-D.birk-pep:
#  I-D.marques-pep-email:
  I-D.birk-pep-trustwords:
#  I-D.marques-pep-rating:
#  I-D.marques-pep-handshake:
#  I-D.hoeneisen-pep-keysync:
  I-D.draft-symeonidis-pearg-private-messaging-threat-analysis:
{::include ../shared/references/unger-sok.mkd}
{::include ../shared/references/pfitzmann-terminology-privacy.mkd}
{::include ../shared/references/tor-timing-attacks.mkd}
{::include ../shared/references/diaz-measuring-anonymity.mkd}
{::include ../shared/references/ermoshina-end2end-enc.mkd}
{::include ../shared/references/clark-seuring-email.mkd}
# {::include ../shared/references/isoc-btn.mkd}
# {::include ../shared/references/implementation-status.mkd}
--- abstract

{{RFC8280}} has identified and documented important principles, such
as Data Minimization, End-to-End, and Interoperability, in order to
enable access to fundamental Human Rights. While (partial)
implementations of these concepts are already available, many current
applications lack Privacy support that the average user can easily
navigate. This document covers the analysis of threats to privacy and
security and derives requirements from this threat analysis.
--- middle

# Introduction

{{RFC8280}} has identified and documented important principles, such
as Data Minimization, End-to-End, and Interoperability, in order to
enable access to fundamental Human Rights. While (partial)
implementations of these concepts are already available, many current
applications lack Privacy support that the average user can easily
navigate.

In MEDUP these issues are addressed based on Opportunistic Security
{{RFC7435}} principles.

This document covers the analysis of threats to privacy and security
and derives requirements from this threat analysis.

\[\[ TODO: Rewrite after split \]\]

{::include ../shared/text-blocks/key-words-rfc2119.mkd}

{::include ../shared/text-blocks/terms-intro.mkd}

<!-- {::include ../shared/text-blocks/handshake.mkd} -->

{::include ../shared/text-blocks/trustwords.mkd}

{::include ../shared/text-blocks/tofu.mkd}

{::include ../shared/text-blocks/mitm.mkd}
<!--
**[Iraklis]: I copied [nk]'s feedback here below. It is a very valid
comment; the thinking was that the text may fit better in the RFC.

**[nk]: I would encourage adding one paragraph saying that the threat
model also depends on the background of the user (specifically if
lives are at stake), e.g., investigative journalists, whistle-blowers,
or dissidents from repressive countries have much more severe
requirements than organisations or "normal" people from the broad
public. Yet, Privacy and Security on the Internet are always Human
Rights, and technological solutions should enable easily usable means
of protection. But if somebody requires stronger protection, usability
may be a second priority in favour of solutions offering a more
profound protection (hardware questions also factor in more in these
latter cases).
-->
# Motivation and Background

## Objectives

* An open standard for secure messaging requirements
* A unified evaluation framework: unified goals and threat models
* Common pitfalls
* Future directions for requirements and technologies
* Misleading products in the wild (cf. the EFF secure messaging
  scorecard)
## Known Implementations

### Pretty Easy Privacy (pEp) {#pEp}

To achieve privacy of exchanged messages in an opportunistic way
{{RFC7435}}, the following (simplified) model is proposed by pEp
(pretty Easy Privacy) {{I-D.birk-pep}}:

{::include ../shared/ascii-arts/basic-msg-flow.mkd}

<vspace blankLines="10" />

<!-- pEp uses the paradigm of online and offline transports.

An offline transport transports messages by store and forward. The
connection status of the receiver is not important while sending, and
it may not be available at all. Examples are Internet Mail and SMS.

An online transport transports messages synchronously to receivers if
those are online. If receivers are offline, no message can be
transported. The connection status of a receiver is available to the
sender. Examples are Jabber and IRC. -->

pEp is intended to solve three problems <!-- for both types of
transports -->:

* Key management
* Trust management
* Identity management

pEp is intended to be used in pre-existing messaging solutions and to
provide Privacy by Default, at a minimum, for message content. In
addition, pEp provides technical data protection, including metadata
protection.

An additional set of use cases applies to enterprise environments
only. In some instances, the enterprise may require access to message
content. Reasons for this may include the need to conform to
compliance requirements or virus/malware defense.
### Autocrypt

Another known approach in this area is Autocrypt. Compared to pEp
(cf. {{pEp}}), there are certain differences, for example regarding
the prioritization of support for legacy PGP {{RFC4880}}
implementations.

More information on Autocrypt can be found at:
https://autocrypt.org/background.html

\[\[ TODO: Input from autocrypt group \]\]
## Focus Areas (Design Challenges)

* Trust establishment: some human interaction
* Conversation security: no human interaction
* Transport privacy: no human interaction
# System Model

## Entities

* Users, sender and receiver(s): The communicating parties who
  exchange messages, typically referred to as senders and receivers.
* Messaging operators and network nodes: The messaging service
  providers and network nodes that are responsible for message
  delivery and synchronization.
* Third parties: Any other entity who interacts with the messaging
  system.
## Basic Functional Requirements

This section outlines the functional requirements. We follow the
requirements extracted from the literature on private email and
instant messaging {{Unger}} {{Ermoshina}} {{Clark}}.

* Message: send and receive message(s)
* Multi-device support: synchronization across multiple devices
* Group messaging: communication among more than two users

\[\[ TODO: Add more text on Group Messaging requirements. \]\]
# Threat Analyses

This section describes a set of possible threats. Note that not all
threats can be addressed, due to conflicting requirements.

## Adversarial Model

An adversary is any entity who leverages threats against the
communication system with the goal of gaining improper access to
message content and users' information. An adversary can be anyone
who is involved in communication, such as users of the system,
message operators, network nodes, or even third parties.

* Internal - external: An internal adversary can seize control of
  entities within the system, for example extracting information from
  a specific entity or preventing a message from being sent. An
  external adversary can only compromise the communication channels
  themselves, eavesdropping on and tampering with messages, such as
  by performing Man-in-the-Middle (MitM) attacks.

* Local - global: A local adversary can control one entity that is
  part of a system, while a global adversary can seize control of
  several entities in a system. A global adversary can also monitor
  and control several parts of the network, granting them the ability
  to correlate network traffic, which is crucial in performing timing
  attacks.

* Passive - active: A passive attacker can only eavesdrop and extract
  information, while an active attacker can tamper with the messages
  themselves, such as adding, removing, or even modifying them.

Attackers can combine these adversarial properties in a number of
ways, increasing the effectiveness - and probable success - of their
attacks. For instance, an external global passive attacker can
monitor multiple channels of a system, while an internal local active
adversary can tamper with the messages of a targeted messaging
provider {{Diaz}}.
  173. ## Security Threats and Requirements
  174. ### Spoofing and Entity Authentication
  175. Spoofing occurs when an adversary gains improper access to the system
  176. upon successfully impersonating the profile of a valid user. The
  177. adversary may also attempt to send or receive messages on behalf of
  178. that user. The threat posed by an adversary's spoofing capabilities is
  179. typically based on the local control of one entity or a set of
  180. entities, with each compromised account typically is used to
  181. communicate with different end-users. In order to mitigate spoofing
  182. threats, it is essential to have entity authentication mechanisms in
  183. place that will verify that a user is the legitimate owner of a
  184. messaging service account. The entity authentication mechanisms
  185. typically rely on the information or physical traits that only the
  186. valid user should know/possess, such as passwords, valid public keys,
  187. or biometric data like fingerprints.
### Information Disclosure and Confidentiality

An adversary aims to eavesdrop on and disclose information about the
content of a message. They can attempt to perform a man-in-the-middle
(MitM) attack, for example by positioning themselves between two
communicating parties, such as by gaining access to the messaging
server, and remaining undetectable while collecting information
transmitted between the intended users. The threat posed by an
adversary can stem from local control of one point of a communication
channel, such as an entity or a communication link within the
network. The adversarial threat can also be broader in scope, such as
seizing global control of several entities and communication links
within the channel. That grants the adversary the ability to
correlate and control traffic in order to execute timing attacks,
even in end-to-end communication systems {{Tor}}. Therefore,
confidentiality of messages exchanged within a system should be
guaranteed with the use of encryption schemes.
### Tampering With Data and Data Authentication

An adversary can also modify the information stored and exchanged
between the communicating entities in the system. For instance, an
adversary may attempt to alter an email or an instant message by
changing its content. The adversary can be anyone other than the
communicating users, such as the message operators, the network
nodes, or third parties. The threat posed by an adversary can lie in
gaining local control of an entity which can alter messages, usually
resulting in a MitM attack on an encrypted channel. Therefore, no
honest party should accept a message that was modified in transit.
Data authentication of exchanged messages needs to be guaranteed,
such as with the use of Message Authentication Codes (MACs) and
digital signatures.
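As a non-normative illustration of the MAC-based data authentication
described above, the following sketch uses Python's standard `hmac`
module; the key and message values are purely hypothetical:

```python
import hmac
import hashlib

def mac_tag(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag binding the message to a shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time comparison; reject any message altered in transit."""
    return hmac.compare_digest(mac_tag(key, message), tag)

key = b"shared-secret"   # hypothetical pre-shared key
msg = b"meet at noon"
tag = mac_tag(key, msg)

assert verify(key, msg, tag)                 # unmodified message: accepted
assert not verify(key, b"meet at one", tag)  # modified in transit: rejected
```

A digital signature scheme provides the same integrity check with a
private/public key pair instead of a shared key, and additionally
supports the non-repudiation property discussed below.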
### Repudiation and Accountability (Non-Repudiation)

Adversaries can repudiate, or deny, the status of a message to users
of the system. For instance, an adversary may attempt to provide
inaccurate information about an action performed, such as about
sending or receiving an email. An adversary can be anyone who is
involved in communication, such as the users of the system, the
message operators, and the network nodes. To mitigate repudiation
threats, accountability and non-repudiation of actions performed must
be guaranteed. Non-repudiation of an action can include proof of
origin, submission, delivery, and receipt between the intended
users. Non-repudiation can be achieved with the use of cryptographic
schemes, such as digital signatures, and audit trails, such as
timestamps.
## Privacy Threats and Requirements

### Identifiability -- Anonymity

Identifiability is defined as the extent to which a specific user can
be identified from a set of users, the identifiability set.
Identification is the process of linking information to allow the
inference of a particular user's identity {{RFC6973}}. An adversary
can identify a specific user associated with Items of Interest
(IOIs), which include items such as the ID of a subject, a sent
message, or an action performed. For instance, an adversary may
identify the sender of a message by examining the headers of a
message exchanged within a system. To mitigate identifiability
threats, the anonymity of users must be guaranteed. Anonymity is
defined from the attacker's perspective as "the attacker cannot
sufficiently identify the subject within a set of subjects, the
anonymity set" {{Pfitzmann}}. Essentially, in order to make anonymity
possible, there always needs to be a set of possible users such that,
for an adversary, the communicating user is equally likely to be any
other user in the set {{Diaz}}. Thus, an adversary cannot identify
the sender of a message. Anonymity can be achieved with the use of
pseudonyms and cryptographic schemes such as anonymous remailers
(i.e., mixnets), anonymous communication channels (e.g., Tor), and
secret sharing.
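The anonymity notion above can be quantified. {{Diaz}} measures the
degree of anonymity as the entropy of the attacker's probability
distribution over the anonymity set, normalized by the maximum
entropy of a uniform set. A small non-normative sketch (the example
distributions are hypothetical):

```python
import math

def degree_of_anonymity(probs):
    """Entropy of the attacker's distribution over candidate senders,
    normalized by the maximum entropy log2(N) of a uniform anonymity
    set of the same size (cf. Diaz et al.)."""
    n = len(probs)
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return entropy / math.log2(n)

# Uniform distribution: every user is equally likely -> degree 1.0
print(degree_of_anonymity([0.25] * 4))                # 1.0
# Skewed distribution: attacker strongly suspects one sender -> < 1.0
print(degree_of_anonymity([0.85, 0.05, 0.05, 0.05]))
```

A degree close to 1.0 means the communicating user is close to
"equally likely to be any other user in the set"; a degree near 0
means the attacker has effectively identified the sender.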
### Linkability -- Unlinkability

Linkability occurs when an adversary can sufficiently distinguish
within a given system that two or more IOIs, such as subjects (i.e.,
users), objects (i.e., messages), or actions, are related to each
other {{Pfitzmann}}. For instance, an adversary may be able to relate
pseudonyms by analyzing exchanged messages and deduce that the
pseudonyms belong to one user (though the user may not necessarily be
identified in this process). Therefore, unlinkability of IOIs should
be guaranteed through the use of pseudonyms as well as cryptographic
schemes such as anonymous credentials.
### Detectability and Observability -- Undetectability

Detectability occurs when an adversary is able to sufficiently
distinguish an IOI, such as messages exchanged within the system,
from random noise {{Pfitzmann}}. Observability occurs when that
detectability coincides with a loss of anonymity for the entities
within the same system. An adversary can exploit these states in
order to infer linkability and possibly identification of users
within a system. Therefore, undetectability of IOIs should be
guaranteed, which also ensures unobservability. Undetectability of an
IOI is defined as "the attacker cannot sufficiently distinguish
whether it exists or not" {{Pfitzmann}}. Undetectability can be
achieved through the use of cryptographic schemes such as mix-nets
and obfuscation mechanisms such as the insertion of dummy traffic
within a system.
### Information Disclosure -- Confidentiality

Information disclosure -- or loss of confidentiality -- about users,
message content, metadata, or other information is not only a
security but also a privacy threat that a communication system can
face. For example, a successful MitM attack can yield metadata that
can be used to determine with whom a specific user communicates, and
how frequently. To guarantee the confidentiality of messages and
prevent information disclosure, security measures need to be in
place, using cryptographic schemes such as symmetric, asymmetric, or
homomorphic encryption and secret sharing.
### Non-repudiation and Deniability

Non-repudiation, in contrast to its security benefits, can be a
threat to a user's privacy in private messaging systems. As discussed
in {{repudiation-and-accountability-non-repudiation}},
non-repudiation should be guaranteed for users. However,
non-repudiation carries a potential threat in itself when it is used
against a user in certain instances. For example, whistle-blowers may
find non-repudiation used against them by adversaries, particularly
in countries with strict censorship policies and in cases where human
lives are at stake. Adversaries in these situations may seek to use
pieces of evidence collected within a communication system to prove
to others that a whistle-blowing user was the originator of a
specific message. Therefore, plausible deniability is essential for
these users, to ensure that an adversary can neither confirm nor
contradict that a specific user sent a particular message.
Deniability can be guaranteed through the use of cryptographic
protocols such as off-the-record messaging.
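A core idea behind off-the-record messaging's deniability is to
authenticate messages with a symmetric MAC key shared by both
parties: the receiver can check integrity, yet a tag proves nothing
to third parties, because the receiver could have forged it. A
non-normative sketch using Python's standard `hmac` module (key and
message values are hypothetical):

```python
import hmac
import hashlib

# Per-session MAC key, known to BOTH Alice and Bob.
shared_mac_key = b"hypothetical per-session MAC key"

def tag(message: bytes) -> bytes:
    """Authenticate a message under the shared MAC key."""
    return hmac.new(shared_mac_key, message, hashlib.sha256).digest()

# Bob can verify that Alice's message was not tampered with...
alice_tag = tag(b"hello Bob")
assert hmac.compare_digest(alice_tag, tag(b"hello Bob"))

# ...but since Bob holds the same key, he could have produced an
# identical tag himself, so the tag cannot prove Alice's authorship
# to anyone else: plausible deniability.
bob_forgery = tag(b"hello Bob")
assert bob_forgery == alice_tag
```

Contrast this with a digital signature, which only the private-key
holder can produce and which therefore supports non-repudiation.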
<!-- =================================================================== -->

<vspace blankLines="4" />

\[\[ TODO: Describe relation of the above introduced Problem Areas to
the scope of MEDUP \]\]
<!--
# Scope of MEDUP

As some of the above introduced Problem Areas {{problem-areas}}
conflict with each other or with other areas of MEDUP, those need to
be prioritized.

## Problem Areas addressed by MEDUP

In MEDUP the following of the above introduced Problem Areas
{{problem-areas}} will be addressed primarily:

\[\[ TODO: Move some of this to the next subsection \]\]

* Spoofing and Entity Authentication
  (cf. {{spoofing-and-entity-authentication}})
* Information Disclosure and Confidentiality
  (cf. {{information-disclosure-and-confidentiality}})
* Tampering With Data and Data Authentication
  (cf. {{tampering-with-data-and-data-authentication}})
* Repudiation and Accountability (Non-Repudiation)
  (cf. {{repudiation-and-accountability-non-repudiation}})
* Identifiability -\- Anonymity (cf. {{identifiability-anonymity}})
* Linkability -\- Unlinkability (cf. {{linkability-unlinkability}})
* Detectability and Observability -\- Undetectability
  (cf. {{detectability-and-observability-undetectability}})
* Information Disclosure -\- Confidentiality
  (cf. {{information-disclosure-confidentiality}})
* Non-Repudiation and Deniability
  (cf. {{non-repudiation-and-deniability}})

## Problem Areas partially addressed by MEDUP

The following of the above introduced Problem Areas {{problem-areas}}
MAY be addressed in MEDUP only to the extent that they are not in
conflict with the Problem Areas addressed by MEDUP.

* ...

\[\[ TODO: Move some of this from the previous subsection \]\]
-->

<!-- =================================================================== -->
# Specific Security and Privacy Requirements

\[\[ This section is still in an early draft state, to be
substantially improved in future revisions. Among other things, there
needs to be a clearer distinction between MEDUP requirements and
those of a specific implementation. \]\]
## Message Exchange

### Send Message

* Send encrypted and signed message to another peer
* Send unencrypted and unsigned message to another peer

Note: Subcases of sending messages are outlined in
{{subcases-for-sending-messages}}.

### Receive Message

* Receive encrypted and signed message from another peer
* Receive encrypted, but unsigned message from another peer
* Receive signed, but unencrypted message from another peer
* Receive unencrypted and unsigned message from another peer

Note: Subcases of receiving messages are outlined in
{{subcases-for-receiving-messages}}.
## Trust Management

* Trust rating of a peer is updated (locally) when:
  * Public Key is received the first time
  * Trustwords have been compared successfully and confirmed by user
    (see above)
* Trust of a peer is revoked (cf. {{key-management}}, Key Reset)
* Trust of a public key is synchronized among different devices of
  the same user

Note: Synchronization management (such as the establishment or
revocation of trust) among a user's own devices is described in
{{synchronization-management}}.
## Key Management

* New key pair is automatically generated at startup if none is found
* Public Key is sent to peer via message attachment
* Once received, Public Key is stored locally
* Key pair is declared invalid and other peers are informed (Key
  Reset)
* Public Key is marked invalid after receiving a key reset message
* Public Keys of peers are synchronized among a user's devices
* Private Keys are synchronized among a user's devices

Note: Synchronization management (such as establishing or revoking
trust) among a user's own devices is described in
{{synchronization-management}}.
## Synchronization Management

A device group consists of devices belonging to one user, which share
the same key pairs in order to synchronize data among them. In a
device group, devices of the same user mutually grant authentication.

* Form a device group of two (yet ungrouped) devices of the same user

<!--
Note: Preconditions for forming a device group are outlined in
{{preconditions-for-forming-a-device-group}}.
-->

* Add another device of the same user to an existing device group
* Leave device group
* Remove another device from the device group
## Identity Management

* All involved parties share the same identity system

## User Interface

\[\[ TODO \]\]

<!--
* Need for user interaction is kept to the minimum necessary
* The privacy status of a peer is presented to the user in an easily
  understandable way, e.g., by a color rating
* The privacy status of a message is presented to the user in an
  easily understandable way, e.g., by a color rating
* The color rating is defined by traffic-light semantics

\[\[ TODO: rewrite "in an easily understandable way" \]\]
-->
# Subcases

<!-- Do we need this section at all? -->

## Interaction States

The basic model consists of different interaction states:

<!-- [nk]: you might consider changing the numbers to a three-fold
system (for 3 different cases), and repeat the same numbering,
differentiating them in the subcases in the table below. Also it
would be good to have some explanation of when or why these cases
occur, maybe linking them with a timeline of starting and
establishing a conversation as well as an explicit reference to the
TOFU concept. -->
1. Both peers have no public key of each other; no trust possible
2. Only one peer has the public key of the other peer, but no trust
3. Only one peer has the public key of the other peer and trusts
   that public key
4. Both peers have the public key of each other, but no trust
5. Both peers have exchanged public keys, but only one peer trusts the
   other peer's public key
6. Both peers have exchanged public keys, and both peers trust the
   other's public key
The following table shows the different interaction states possible:

| state | Peer's Public Key available | My Public Key available to Peer | Peer Trusted | Peer trusts me |
|-------|:---------------------------:|:-------------------------------:|:------------:|:--------------:|
| 1.    | no                          | no                              | N/A          | N/A            |
| 2a.   | no                          | yes                             | N/A          | no             |
| 2b.   | yes                         | no                              | no           | N/A            |
| 3a.   | no                          | yes                             | N/A          | yes            |
| 3b.   | yes                         | no                              | yes          | N/A            |
| 4.    | yes                         | yes                             | no           | no             |
| 5a.   | yes                         | yes                             | no           | yes            |
| 5b.   | yes                         | yes                             | yes          | no             |
| 6.    | yes                         | yes                             | yes          | yes            |
In the simplified model, only interaction states 1, 2, 4, and 6 are
depicted. States 3 and 5 may result from, e.g., key mistrust or
abnormal user behavior. Interaction states 1, 2, and 4 are part of
TOFU. For a better understanding, consult the figure in {{pEp}}
above.

Note: In situations where one peer has multiple key pairs, or group
conversations are occurring, interaction states become increasingly
complex. For now, we focus on a single bilateral interaction between
two peers, each possessing a single key pair.

\[\[ Note: Future versions of this document will address more complex
cases \]\]
## Subcases for Sending Messages

* If peer's Public Key is not available (Interaction States 1, 2a,
  and 3a):
  * Send message unencrypted (and unsigned)
* If peer's Public Key is available (Interaction States 2b, 3b, 4,
  5a, 5b, and 6):
  * Send message encrypted and signed
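The sending rule above can be sketched non-normatively as follows;
`encrypt_and_sign()` is a hypothetical stand-in for the actual
cryptographic layer (e.g., an OpenPGP implementation), stubbed here
only to keep the sketch self-contained:

```python
def encrypt_and_sign(message: bytes, peer_public_key) -> bytes:
    # Placeholder for the real cryptographic layer; it merely tags
    # the payload so the sketch runs on its own.
    return b"ENC(" + message + b")"

def prepare_outgoing(message: bytes, peer_public_key):
    """Opportunistic rule from the subcases above: encrypt and sign
    whenever the peer's public key is available (states 2b, 3b, 4,
    5a, 5b, 6); otherwise send unencrypted and unsigned (states 1,
    2a, 3a) rather than fail, per Opportunistic Security (RFC 7435)."""
    if peer_public_key is None:
        return ("unencrypted", message)
    return ("encrypted+signed", encrypt_and_sign(message, peer_public_key))
```

The key design point is that a missing key degrades protection rather
than blocking communication, which is the Opportunistic Security
trade-off {{RFC7435}}.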
## Subcases for Receiving Messages

* If peer's Public Key is not available (Interaction States 1, 2a,
  and 3a):
  * If message is signed:
    * Ignore signature
  * If message is encrypted:
    * Decrypt with caution
  * If message is unencrypted:
    * No further processing regarding encryption
* If peer's Public Key is available or can be retrieved from the
  received message (Interaction States 2b, 3b, 4, 5a, 5b, and 6):
  * If message is signed:
    * Verify signature
    * If message is encrypted:
      * Decrypt
    * If message is unencrypted:
      * No further processing regarding encryption
  * If message is unsigned:
    * If message is encrypted:
      * Exception
    * If message is unencrypted:
      * No further processing regarding encryption
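The receiving subcases above amount to a small dispatch table. A
non-normative sketch, where `Msg` is a hypothetical stand-in for a
parsed message carrying only the two relevant flags:

```python
from collections import namedtuple

# Minimal stand-in for a parsed incoming message.
Msg = namedtuple("Msg", "encrypted signed")

def handle_incoming(msg: Msg, have_peer_key: bool) -> str:
    """Dispatch rule mirroring the receive subcases above."""
    if not have_peer_key:                 # states 1, 2a, 3a
        # A signature cannot be verified without the peer's key,
        # so it is ignored.
        return "decrypt with caution" if msg.encrypted else "plaintext"
    # Peer's key available or retrievable: states 2b, 3b, 4, 5a, 5b, 6.
    if msg.signed:
        return ("decrypt, verify signature" if msg.encrypted
                else "verify signature")
    if msg.encrypted:
        return "exception"                # encrypted but unsigned
    return "plaintext"
```

Note the asymmetry: an encrypted but unsigned message is an
exceptional case once the peer's key is known, whereas without the
key the receiver can only decrypt with caution.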
<!-- =================================================================== -->

# Security Considerations

Relevant security considerations are outlined in
{{security-threats-and-requirements}}.

# Privacy Considerations

Relevant privacy considerations are outlined in
{{privacy-threats-and-requirements}}.

# IANA Considerations

This document requests no action from IANA.

\[\[ RFC Editor: This section may be removed before publication. \]\]

# Acknowledgments

The authors would like to thank the following people who have
provided feedback or significant contributions to the development of
this document: Athena Schumacher, Claudio Luck, Hernani Marques,
Kelly Bristol, Krista Bennett, and Nana Karlstetter.
--- back

# Document Changelog

\[\[ RFC Editor: This section is to be removed before publication \]\]

* draft-symeonidis-medup-requirements-00:
  * Initial version
* draft-symeonidis-medup-requirements-01:
  * Split of document:
    * moved threat analysis related sections to the new I-D
      {{I-D.draft-symeonidis-pearg-private-messaging-threat-analysis}}
  * Updated title

# Open Issues

\[\[ RFC Editor: This section should be empty and is to be removed
before publication \]\]

* Add references to used materials (in particular the threat analysis
  part)
* Get content from Autocrypt ({{autocrypt}})
* Add more text on Group Messaging requirements
* Decide on whether or not the "enterprise requirement" will go into
  this document
<!-- LocalWords: utf docname symeonidis medup toc sortrefs symrefs
-->
<!-- LocalWords: vspace blankLines pre Autocrypt autocrypt Unger
-->
<!-- LocalWords: Ermoshina MitM homomorphic IOI Diaz remailers IOIs
-->
<!-- LocalWords: mixnets Linkability Unlinkability unlinkability
-->
<!-- LocalWords: Detectability observatility Unditectability Nana
-->
<!-- LocalWords: undetectability Pfitzmann Subcases subcases
-->
<!-- LocalWords: ungrouped Identifiability Changelog Schumacher
-->
<!-- LocalWords: Karlstetter
-->