1.1 Problem Description
Original English Text
Electronic communication and social media have become pervasive, and reliance on them widespread. One result is that some people seem willing to share private information (PI) about their personal interactions, relationships, purchases, beliefs, health, and movements, while others hold their privacy in these areas as very important and valuable. There are also significant differences in privacy choices across various domains. For example, some people are quick to give away the protection of their purchasing information for a quick price reduction, but at the same time are unlikely to share information about their disease conditions or health risks. Similarly, some populations or subgroups may be less willing to give up particular types of personal information if they perceive it as posing a personal or community risk. The risk may involve loss of safety, money, valuable items, intellectual property (IP), or the person's electronic identity. Other risks include professional embarrassment, loss of a position or job, social loss (friendships), social stigmatization, or marginalization. While a government employee who has voiced political dissent against the government might be willing to pay to keep their social media data private, a young college student may feel no pressure to restrict their posting of political opinions or social information. It seems that individual choices about PI protection and internet and system security in cyberspace can create risks and rewards in elements of freedom, privacy, convenience, social standing, financial benefit, and medical treatment.
Is private information (PI) similar to private personal property (PP) and intellectual property (IP)? Once lawfully obtained, can PI be sold or given to others who then have the right to, or ownership of, the information? As detailed information and metadata about human activity become more and more valuable to society, specifically in the areas of medical research, disease spread, disaster relief, business (e.g., marketing, insurance, and income), records of personal behaviors, statements of beliefs, and physical movement, these data may become a valuable and quantifiable commodity. Trading in one's own private data comes with a set of risks and benefits that may differ by the domain of the information (e.g., purchasing, social media, medical) and by subgroup (e.g., citizenship, professional profile, age).
Can we quantify the cost of privacy of electronic communications and transactions across society? That is, what is the monetary value of keeping PI protected, or how much would it cost for others to have or use PI? Should the government regulate this information, or is it better left to the privacy industry or the individual? Are these information and privacy issues merely personal decisions that individuals must evaluate in order to make their own choices and provide their own protection?
There are several things to consider when evaluating the cost of privacy. First, is data sharing a public good? For example, the Centers for Disease Control and Prevention (CDC) may use such data to trace the spread of a disease in order to prevent further outbreaks. Other examples include managing at-risk populations, such as children under 16, people at risk of suicide, and the elderly. Moreover, consider groups of extremists who seek to hide their activities: should their data be trackable by the government for national security reasons? Finally, consider a person's browser, phone system, and internet feed with their personalized advertisements; how much is this customization worth?
Overall, when evaluating the cost of privacy we need to consider all of these tradeoffs. What is the potential gain from keeping data private, and what is lost by doing so?
As a policy analysis team for a national decision maker, your team's tasks are:
Task 1: Develop a price point for protecting one's privacy and PI in various applications. To evaluate this, you may want to categorize individuals into subgroups with reasonably similar levels of risk, or into related domains of the data. What set of parameters and measures would need to be considered to accurately model risk, accounting for both (1) characteristics of the individuals and (2) characteristics of the specific domain of information?
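As a minimal illustration of how such parameters might be combined, the sketch below scores risk from individual and domain characteristics. All factor names and values are hypothetical, chosen only to show the structure:

```python
# Minimal sketch: a 0-to-1 risk score combining (1) characteristics of the
# individual and (2) characteristics of the information domain.
# All factor names and values below are hypothetical.

INDIVIDUAL_FACTORS = {          # e.g., a politically exposed professional
    "public_profile": 0.3,
    "political_exposure": 0.9,
    "age_sensitivity": 0.6,
}
DOMAIN_FACTORS = {              # e.g., the health/medical domain
    "re_identifiability": 0.8,
    "harm_if_leaked": 0.7,
}

def risk_score(individual, domain):
    """Unweighted mean of all factor scores (equal weights, for brevity)."""
    factors = list(individual.values()) + list(domain.values())
    return sum(factors) / len(factors)
```

A real model would justify unequal weights per factor; the equal-weight mean is only a starting point.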
Task 2: Given the set of parameters and measures from Task 1, model the cost of privacy across at least three domains (social media, financial transactions, and health/medical records). In your base model, consider how the tradeoffs and risks of keeping data protected affect the results. You may consider giving some of the tradeoffs and risks more weight than others, as well as stratifying weights by subgroup or category. Consider how different basic elements of the data (e.g., name, date of birth, gender, social security or citizenship number) contribute to your model. Are some of these elements worth more than others? For example, what is the value of a name alone compared with the value of a name with the person's picture attached? Your model should produce a pricing structure for PI.
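One way to structure such a pricing model is a weighted sum over data elements, scaled by domain and subgroup, with a bundling multiplier so that combined elements (e.g., name plus picture) are worth more than the sum of their parts. The sketch below uses entirely hypothetical dollar values and weights:

```python
# Illustrative sketch of a weighted-sum pricing structure for PI.
# All dollar values, weights, and multipliers are hypothetical.

BASE_VALUE = {          # hypothetical standalone value (USD) per element
    "name": 0.10,
    "date_of_birth": 0.50,
    "gender": 0.05,
    "ssn": 5.00,
    "photo": 1.00,
}

DOMAIN_WEIGHT = {       # riskier domains price higher (hypothetical)
    "social_media": 1.0,
    "financial": 2.5,
    "medical": 4.0,
}

def bundle_multiplier(n_elements: int) -> float:
    """Bundled elements identify a person better than any one alone,
    so a bundle is worth more than the sum of its parts."""
    return 1.0 + 0.25 * max(0, n_elements - 1)

def price(elements, domain, subgroup_weight=1.0):
    """Price a bundle of PI elements within a domain for a subgroup."""
    base = sum(BASE_VALUE[e] for e in elements)
    return base * DOMAIN_WEIGHT[domain] * subgroup_weight * bundle_multiplier(len(elements))
```

Under these numbers, a name with a photo attached prices above the name and photo sold separately, capturing the superadditivity the task asks about.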
Task 3: Not long ago, people had no knowledge of which agencies had purchased their PI, how much their PI was worth, or how their PI was being used. New proposals are being put forth that would turn PI into a commodity. Using the pricing structure you generated in Task 2, establish a pricing system for individuals, groups, and entire nations. With data becoming a commodity subject to market fluctuations, is it appropriate to consider forces of supply and demand for PI? Assuming people have control over selling their own data, how does this change the model?
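If supply and demand are admitted into the model, one simple way to explore them is an iterative price-adjustment (tatonnement) loop: raise the price when demand exceeds supply, lower it otherwise. The curves and adjustment speed below are purely hypothetical:

```python
# Minimal sketch of supply-and-demand price discovery for PI records.
# The demand/supply curves and adjustment speed k are hypothetical.

def update_price(p, qty_demanded, qty_supplied, k=0.05):
    """One tatonnement step: raise price under excess demand,
    lower it under excess supply."""
    excess = (qty_demanded - qty_supplied) / max(qty_supplied, 1)
    return max(0.0, p * (1 + k * excess))

def demand(p):
    return max(0.0, 1000 - 40 * p)   # buyers want fewer records as p rises

def supply(p):
    return 20 * p                    # more individuals sell as p rises

p = 1.0
for _ in range(200):
    p = update_price(p, demand(p), supply(p))
# p converges toward the clearing price where 1000 - 40p = 20p
```

Giving individuals control to withhold their data amounts to shifting the supply curve, which this loop makes easy to experiment with.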
Task 4: What are the assumptions and constraints of your model? They should address issues such as government regulations (e.g., price regulations and specific data protections, such as certain records that may not be subject to the economic system) as well as cultural and political issues. Based on your model and these political and cultural issues, consider whether information privacy should be made a basic human right when formulating policy recommendations. Consider adding a dynamic element to your model by introducing variation over time in human decision-making, as personal beliefs change about the worth of one's own personal data (e.g., name, address, picture), transaction data (e.g., online purchases, search history), and social media data (e.g., posts, pictures).
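The dynamic element can be as simple as a category-specific drift in perceived worth, compounded over time. The drift rates below are hypothetical placeholders for whatever time-series evidence a team gathers:

```python
# Minimal sketch of a dynamic element: perceived worth of one's own data
# drifts over time at a category-specific annual rate (hypothetical rates).

DRIFT = {
    "personal": 0.05,       # name, address, picture
    "transaction": 0.02,    # online purchases, search history
    "social_media": 0.08,   # posts, pictures
}

def dynamic_value(base_value, category, years):
    """Compound the base value by the category's annual drift rate."""
    return base_value * (1 + DRIFT[category]) ** years
```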
Task 5: Are there generational differences in perceptions of the risk-to-benefit ratio of PI and data privacy? As generations age, how does this change the model? How is PI different from, or similar to, PP and IP?
Task 6: How can you account for the fact that human data is highly linked and that each individual's behaviors are often highly correlated with others'? Data on one person can provide information about others to whom they are socially, professionally, economically, or demographically connected. Therefore, a personal decision to share one's own data can affect countless others. Are there good ways to capture the network effects of data sharing? Do these effects change the pricing system for individuals, subgroups, and entire communities and nations? If communities have shared privacy risks, is it the responsibility of the communities to protect citizens' PI?
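One way to capture the network effect is to treat the information a sale leaks about each neighbor as an externality and fold it into the price. The graph, values, and leakage fraction below are hypothetical:

```python
# Minimal sketch of a network externality in PI pricing: selling one
# person's data also reveals a fraction of each neighbor's PI.
# The graph, values, and leakage fraction are hypothetical.

network = {
    "alice": ["bob", "carol"],
    "bob":   ["alice"],
    "carol": ["alice"],
}
standalone_value = {"alice": 10.0, "bob": 8.0, "carol": 6.0}
LEAKAGE = 0.3   # fraction of a neighbor's PI value disclosed indirectly

def externality(person):
    """Value of neighbors' PI implicitly disclosed when `person` sells."""
    return sum(LEAKAGE * standalone_value[n] for n in network[person])

def social_price(person):
    """Price that internalizes the externality (a Pigouvian-style markup)."""
    return standalone_value[person] + externality(person)
```

Here well-connected individuals command a higher price precisely because their data discloses more about others, which is one plausible answer to whether network effects change the pricing system.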
Task 7: Consider the effects of a massive data breach in which millions of people's PI are stolen and sold on the dark web, sold as part of an identity theft ring, or used for ransom. How does such a PI loss or cascade event impact your model? Now that you have a pricing system that quantifies the value of data per individual or loss type, are the agencies to blame for the breach responsible for paying individuals directly for the misuse or loss of their PI?
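A breach can be wired into the pricing model in two places: devaluing the remaining records (leaked copies circulate cheaply elsewhere) and computing the breached agency's liability. Both the decay and penalty rates below are hypothetical:

```python
import math

# Minimal sketch of breach effects on a PI pricing model.
# The decay and penalty rates are hypothetical.

def post_breach_value(value, records_breached, market_size, decay=2.0):
    """Exponential devaluation in the breached fraction of the market."""
    f = min(1.0, records_breached / market_size)
    return value * math.exp(-decay * f)

def liability(per_record_price, records_breached, penalty_rate=1.5):
    """Compensation a breached agency might owe individuals directly."""
    return per_record_price * records_breached * penalty_rate
```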
Task 8: Write a two-page policy memo to the decision maker on the utility, results, and recommendations based on your policy modeling of this issue. Be sure to specify what types of PI are included in your recommendations.
Your submission should consist of:
- One-page Summary Sheet,
- Two-page memo,
- Your solution of no more than 20 pages, for a maximum of 23 pages with your summary and memo.
- Note: Reference list and any appendices do not count toward the 23-page limit and should appear after your completed solution.