Highlights and transcript from Zuckerberg's 20K-word ethics talk – TechCrunch


Mark Zuckerberg says Facebook may let people pay to see no ads, but that charging users extra for privacy controls would be wrong. That's just one of the fascinating philosophical points the CEO shared in the first of the public talks he's promised as part of his 2019 personal challenge.

Speaking with Harvard law and computer science professor Jonathan Zittrain on the campus he dropped out of, Zuckerberg managed to escape the 100-minute talk with only a few gaffes. At one point he said, "we definitely don't want a society where there's a camera in everyone's living room watching the content of those conversations." Zittrain swiftly reminded him that this is exactly what Facebook's Portal is, and Zuckerberg tried to deflect by saying Portal's recordings would be encrypted.

Later, Zuckerberg referred to "advertising, which in many places is not even that different in quality from the organic content that people can see" – a rather dour assessment of the personal photos and status updates people share. And when he floated crowdsourced fact-checking, Zittrain noted it could become a vector for "astroturfing," with mobs of users supplying purposely biased information to advance their interests, like political groups voting that facts about their opponents are lies. But while he at times avoided taking a hard stance on the issues, Zuckerberg was otherwise relatively logical and coherent.

Policy and working with governments

The CEO talked about his borderline content policy, which quietly demotes posts that come close to breaking its policies on nudity, hate speech, etc. – posts that would otherwise be the most sensational and get the most distribution, but that don't make people feel good. Zuckerberg noted some progress here, saying, "A lot of the work that we've done over the last year has been focused on this, and it's really improved the quality of the service, and people appreciate that."

This fits with Zuckerberg's conception of Facebook's role as a "data fiduciary" that tries to do what's in the best long-term interest of its community, rather than bowing to users' impulses or prioritizing its short-term share price. "There's a hard balance here – I mean, if you're talking about what people want versus what they say they want – you know, often, people's revealed preferences, what they actually do, demonstrate a deeper sense of what they want than what they say they want," he said. In essence, people may click on clickbait even if it doesn't make them feel good.

On working with governments, Zuckerberg explained that incentives aren't always aligned, such as when law enforcement is surveilling someone who is accidentally dropping clues about their crimes and accomplices. Government and society might benefit from that surveillance continuing, while Facebook might prefer to suspend the account as soon as it's discovered. "But when you build up those relationships and trust, you can get to that kind of relationship where they can also flag for you, 'Hey, this is where we're at,'" he said, implying that Facebook might knowingly leave the person up so they incriminate themselves, assisting authorities.

Divides between governments can flare up, though, as Zuckerberg noted: "We've had employees thrown in jail because we have gotten court orders that we have to turn over data that we probably wouldn't anyway, but we can't because it's encrypted." This was likely a reference to the 2016 arrest of Facebook's VP for Latin America, Diego Dzodan, after WhatsApp's encryption prevented it from providing evidence for a case.

Decentralizing Facebook

The tradeoffs of encryption and decentralization were a central theme. He discussed how, while many fear the way encryption can conceal illegal or offensive activity, Facebook doesn't necessarily need to see someone's actual content to determine whether they're violating its policies. "One of the – I guess, somewhat surprising to me – findings of the last couple of years of working on content governance and enforcement is that it often is much more effective to identify fake accounts and bad actors upstream of them doing something bad by patterns of activity rather than looking at the content," Zuckerberg said.

With Facebook rapidly staffing up a blockchain team, potentially to launch a cryptocurrency for fee-less payments or an identity layer for decentralized apps, Zittrain asked whether users might be given control over which other apps receive their profile information, without Facebook as the middleman.

SAN JOSE, CALIFORNIA – MAY 01: Facebook CEO Mark Zuckerberg (Photo by Justin Sullivan/Getty Images)

Zuckerberg stressed that at Facebook's scale, moving to a less efficient distributed architecture would be extremely "computationally intense," though it might eventually be possible. Instead, he said, "One of the things that I've been thinking about a lot is a use of blockchain that I'm potentially interested in – although I haven't figured out a way to make this work out – is around authentication and basically granting access to your information and to different services. So, basically, replacing the notion of what we have with Facebook Connect with something that's fully distributed." That could appeal to developers who would know Facebook couldn't cut off their access to users.

The problem is that if developers abuse users, Zuckerberg worries, "in a fully distributed system there'd be no one who could cut off the developers' access. So, the question is if you have a fully distributed system, it dramatically empowers individuals on the one hand, but it really raises the stakes and it gets to your questions around, well, what are the boundaries on consent and how people can really actually effectively know that they're giving consent to an institution?"

No "pay for privacy"

But perhaps most novel and urgent were Zuckerberg's comments on the adjacent question of whether Facebook should let people pay to remove ads. "You start getting into a principle question here, which is, 'Are we going to let people pay to have different controls on data use than other people?' And my answer to that is a hard no." Facebook has promised to always operate a free version so everyone can have a voice. Still, some (including me) have suggested that a premium, ad-free subscription to Facebook could let those who want to minimize its data collection and engagement tricks do so, even though it might break Facebook's revenue machine by pulling the most affluent and desirable users out of the ad targeting pool.

"What I'm saying is, on the data use, I don't believe that that's something that people should buy. I think the data principles that we have need to be uniformly available to everyone. That to me is a really important principle," Zuckerberg expanded. "It's, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn't feel like a moral question to me. But the question of whether you can pay to have different privacy controls feels wrong."

Back in May, Zuckerberg announced that Facebook would build a Clear History button in 2018 that deletes all the web browsing data the social network has collected about you, but that data's deep integration with the company's systems has delayed the launch. Research showed that users don't want the inconvenience of being logged out of all their Facebook Connected services, but they do want to hide certain data from the company.

"Clear history is a prerequisite, I think, for being able to do anything like subscriptions. Because, like, partially what someone would want to do if they were going to really actually pay for a not-ad-supported version where their data isn't being used in a system like that is, you would want to have a control so that Facebook didn't have access to that data, or wasn't using it, or associating it with your account. And as a principled matter, we are not going to just offer a control like that to people who pay."

Of all the apologies, promises, and predictions Zuckerberg has made lately, this pledge may instill the most confidence. While some may assume Zuckerberg is a data tyrant bent on absorbing and exploiting as much of our personal information as possible, there are at least some lines he's unwilling to cross. Facebook could try to charge you for privacy, but it won't. Given Facebook's dominance of social networking and messaging, and Zuckerberg's voting control of the company, a greedier man could have made the internet much worse.

Transcript – Mark Zuckerberg at Harvard / First 2019 Personal Challenge Talk

Jonathan Zittrain: Very good. So, thank you, Mark, for coming to talk with me and our students from the Techtopia program and my "Internet and Society" course here at Harvard Law School. We're thrilled to get a chance to talk about any number of issues, so we should just dive right in. So: privacy, autonomy, and information fiduciaries.

Mark Zuckerberg: All right!

Jonathan Zittrain: Love to talk about that.

Mark Zuckerberg: Yeah! I read your piece in The New York Times.

Jonathan Zittrain: The one with the headline that said, "Mark Zuckerberg can fix this mess"?

Mark Zuckerberg: Yeah.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: Although that was last year.

Jonathan Zittrain: It was! Are you suggesting it's all fixed?

Mark Zuckerberg: No, no.

Jonathan Zittrain: Okay. So–

Jonathan Zittrain: What I'd suggest is that I'm curious whether you still think we can fix this mess?

Jonathan Zittrain: Ah!

Jonathan Zittrain: I hope–

Jonathan Zittrain: "Hope springs eternal"–

Mark Zuckerberg: Yes, there you go.

Jonathan Zittrain: – is my motto. So, okay, let me quickly describe the idea, whose coinage and scaffolding come from my colleague Jack Balkin at Yale; the two of us have been developing it further. There's a standard set of privacy concerns you might be familiar with, having to do with people conveying information they know they're conveying, or information they're not so sure about – what we used to call "mouse droppings," the traces left behind as they scurry around the rafters of the internet. And the standard way of talking about it is that you want to make sure that stuff doesn't go where you don't want it to go. Call it "informational privacy." We don't want people knowing things we'd prefer only our friends knew. On a place like Facebook, you ought to be able to adjust your settings and say, "Give them this, but not that." But there are also ways in which the stuff we share can still be used against us, where "well, you consented" probably shouldn't end the discussion. The analogy my colleague Jack brings to bear is that of a doctor and a patient, or a lawyer and a client – or, sometimes in America, though not always, a financial advisor and a client – where these professionals have certain expertise, they get entrusted with all sorts of sensitive information from their clients and patients, and so they have a duty to act in the interests of those clients, even when those conflict with their own interests. And maybe just one quick hypothetical to get us started. I wrote a piece in 2014, maybe you read it, with a hypothetical about elections that said: "Suppose Facebook had a view about which candidate should win, and it reminded only the people likely to vote for the favored candidate that it was Election Day," while to everybody else it just sent a cat photo. Would that be wrong? And I found – I don't know whether it's unlawful, but it seemed wrong to me, and maybe the fiduciary approach captures why it's wrong.

Mark Zuckerberg: All right. So, I think we could probably spend the whole time just talking about that!

Mark Zuckerberg: So, I read your op-ed, and I also read Balkin's blog post on information fiduciaries. And I've talked with him about it, too.

Jonathan Zittrain: Great.

Mark Zuckerberg: And – at a first glance, reading through this stuff, my reaction was that a lot of it makes sense. Right? The idea that we have a fiduciary relationship with the people who use our services is kind of intuitive – it's how we think about how we're building what we're building. So, reading through this, it's like, all right, a lot of people seem to have the misconception that when we put together News Feed and do the ranking, we have teams of people focused on maximizing the time people spend, but that's not the goal we give them. We tell the teams, "Produce the service" – the one we think will be the highest quality – and we try to bring people in and have them tell us, of the content we could potentially show them, what they want to see, and then we build models that can predict that and build that service.

Jonathan Zittrain: Was that always the case, by the way – or –

Mark Zuckerberg: No.

Jonathan Zittrain: – is that something you arrived at through some course correction?

Mark Zuckerberg: Through course correction. I mean, you start off using simpler signals, like what people are clicking on in feed, but then you pretty quickly learn, "Hey, that only gets you to a local optimum," right? If you're focused on what people click on and predicting what people will click on, then you select for clickbait. Right? So, pretty quickly you realize from real feedback from real people that that's not actually what people want. You're not going to build the best service by doing that. So, you bring people in and you actually have these panels – we call it "getting to ground truth" – where you show people all the candidates for what could be shown to them, and you have people say, "What's the most meaningful thing I wish this system were showing me?" So, all that is to say that our own self-image of ourselves and what we're doing is that we're acting as fiduciaries and trying to build the best services for people. Where I think this ends up getting interesting is the question of who gets to decide, in the legal sense, or in the policy sense, what's in people's best interest. Right? We come in every day and think, "Hey, we're building a service where we're ranking News Feed, trying to show people the most relevant content, on the assumption – which the data supports – that, in general, people want us to show them the most relevant content." But at some level you could ask the question, "Who gets to decide that ranking News Feed, or showing relevant ads" – or anything else we choose to work on – is actually in people's interest? We're doing our best to try to build the services [ph?] that we think are the best. At the end of the day, a lot of it rests on "people choose to use it." Right? Because, clearly, they're getting some value from it. But then there are all these questions, like you say, about where people can effectively give consent and where they can't.

Jonathan Zittrain: Yes.

Mark Zuckerberg: So, I think there are a lot of interesting questions to unpack there about how you'd implement a model like this. But, at a high level, I think – in terms of us running this big company – it matters to society that people trust the institutions in it. Obviously, I think we're at a point now where people have a lot of questions about big internet companies, Facebook in particular, and I do think that getting the right regulation and rules in place just provides a kind of societal-guardrail framework, so people can feel confident that, okay, these companies are operating within a framework that we've all agreed on. That's better than them doing whatever they want. I think that would give people confidence. So, figuring out what that framework is, I think, is a really important thing. And I'm sure we'll talk about it as it relates to –

Jonathan Zittrain: Yes.

Mark Zuckerberg: – a lot of the content areas today. But to get to your question of – "who decides what's in people's best interest, if not the people themselves?" –

Jonathan Zittrain: Yes.

Mark Zuckerberg: – that's a really interesting question.

Jonathan Zittrain: Yes, so, we should definitely talk about that. So, the "who decides?" question is on our agenda.

Mark Zuckerberg: All right.

Jonathan Zittrain: Other agenda items include – as you say, the fiduciary framework sounds good to you – doctor, patient; Facebook, user. And I hear you saying that's where you'd like to end up. There are some interesting questions about what people want versus what they say they want.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: People will say, "What I want, come January 1st" – the New Year's resolution – "is a gym membership." And then on January 2nd, they don't want to go to the gym. They want to want to go to the gym, but they never quite get there. And then, of course, there's the pay-a-year-in-advance business model that counts on you never showing up. I guess one particular area it might be worth drilling into is advertising – maybe the dichotomy between personalization and, is it exploitation? Now, there may be substantive things – I know Facebook, for example, bans payday loans, as best it can.

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: That's just a substantive area where it's like, "Okay, we don't want to do that."

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: But when we think about good personalization, so that Facebook knows I have a dog rather than a cat, and a targeter can then offer me dog food rather than cat food – what about a day, if it isn't already here, when the ad platform can offer targeters somebody for whom "I just lost my pet, I'm really distraught, and I'm ready to make some decisions I might regret" – regret later, but at the moment I make them –

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: "– I'm going to make them." So, it's the perfect time to tee up–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: – cubic zirconia or whatever else.

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: It seems to me a fiduciary approach would say, ideally – how we get there, I don't know, but ideally – we wouldn't permit that kind of approach: using what we've gleaned from people to know they're in a tough spot –

Mark Zuckerberg: Yeah.

Jonathan Zittrain: – and then exploiting them. But I don't know. I don't know how you'd think about something like that. Could you write an algorithm to detect something like that?

Mark Zuckerberg: Well, I think one of the key principles is that we're trying to run this company for the long term. And I think people assume a lot of things that – if you were just trying to optimize profits for the next quarter or something like that, you might want to do things that people might engage with in the short term but would resent in the long run. But if you actually care about building community, achieving the mission, and building the company for the long term, I think you're just much more aligned with people than companies are often thought to be. And it gets back to the earlier idea, where I think our self-image is largely, as you say, of being in this kind of fiduciary relationship – and we could go through lots of different examples. I mean, we don't want to show people content that they're going to click on and engage with but then feel like they wasted their time. We don't want to show them things that they're going to make a decision based on and then regret. I mean, there's a hard balance here – I mean, if you're talking about what people want versus what they say they want – you know, often, people's revealed preferences, what they actually do, demonstrate a deeper sense of what they want than what they say they think they want. So, I think there's a question between what's exploitative versus what's real, just not what you'd say you want.

Jonathan Zittrain: Yes.

Mark Zuckerberg: That's a really hard thing to get at.

Jonathan Zittrain: Yes.

Mark Zuckerberg: But in a lot of these cases, my experience of running the company is that you start off building a system with relatively simple signals, and over time you build more and more sophisticated models that try to take into account more of what people care about. We could go through all these examples. I think News Feed and ads are probably the two most complex ranking examples –

Jonathan Zittrain: Yes.

Mark Zuckerberg: – that we have. But like we were talking about a second ago, when we got started with these systems – I mean, just starting with News Feed, though you could do this on ads, too – you know, the most naive signals, right, are what people click on or "like." But you quickly realize that that's an approximation – a crude approximation – of the ground truth of what people actually care about. So, what you really want to get to is, as much as possible, having real people look at the real candidate content and tell you, in a multi-dimensional way, what matters to them, and then try to build systems that model that. And then you want to guard against the downside. So, your payday loan example – when we've talked about this in the past, you've put the question to me: "How do you know when a payday loan is going to be exploitative?" Right? "If it's targeting people in a bad spot?" And our answer was, "Well, we don't really know when it's going to be exploitative, but we think the whole category carries a massive risk of that, so we just ban it" –

Jonathan Zittrain: Right. Which makes it an easy case.

Mark Zuckerberg: Yes. And I think the harder cases are when there's significant upside and significant downside, and you want to weigh them against each other. So, I mean, for example, once we started making a really big effort on preventing election interference, one of the ideas that initially came up was, "Why don't we just ban all ads that relate to politics?" And you quickly get into, all right, what's a political ad? The classic legal definition is things that are around elections and candidates, but that's actually not primarily what Russia and other actors were doing. Right? You know, a lot of the issues we see are around issue ads – basically sowing division on what are social issues. So, okay, but I don't think you want to get in the way of people's speech and their ability to promote and advocate for the issues they care about. So, then the question is, "All right, well, then what's the right balance?" How do you make sure you're providing the right level of controls, so that people who shouldn't be participating in these debates aren't – or that at least you're providing the right transparency? But I think we've veered a bit from the original question –

Jonathan Zittrain: Yes.

Mark Zuckerberg: But – but, yeah. So, let's get back to where we were.

Jonathan Zittrain: Well, here's one way to maybe push it along, which is this: a platform as complete as Facebook offers lots of opportunities to shape what people see, and possibly to help them with those nudges – now's the time to go to the gym, or to keep them out of the clutches of a payday loan. And there's the question of, so long as the platform is in a position to do it, is there then a moral obligation to do it – to help people live the good life?

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: And I worry about any one company having to shoulder the burden of getting that right – if not the most perfect and reasonable news feed possible – for, how many is it, 2.5 billion active users? Something like that.

Mark Zuckerberg: Yeah. On that order.

Jonathan Zittrain: And all the while, there might be ways – getting a little into the engineering of it – to say, "Well, with hindsight, is there a way to design this so that the stakes aren't so high?" Rather than focusing just on, "Gosh, should Facebook be doing this or that?" It's as if there were only one newspaper in the world, or one or two; it's like, "Well, then what The New York Times chooses to put on its homepage, if it were the only newspaper, would take on special significance."

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: So, as a technical matter, some of the students in this room have had a chance to hear from Tim Berners-Lee, the inventor of the World Wide Web, who has a new idea for something called "Solid." I don't know if you've heard of Solid. It's a protocol rather than a product, so there's no car to drive off the lot today. But the idea behind it is to allow people to own the data they generate as they motor around the web, ultimately in data lockers of their own. Now, for somebody like Tim, that might mean a locker under his desk, so he can wake up in the middle of the night and see where his data is. For others, it might mean on a rack somewhere, perhaps guarded by a fiduciary looking out for them, the way we put money in a bank and can then sleep at night knowing the bankers are – which may not be the best analogy for 2019, but look –

Mark Zuckerberg: We'll get there.

Jonathan Zittrain: We'll get there. But Solid says that if you do that, then people – or their helpful proxies – would be able to say, "Okay, Facebook is coming along. It wants the following data from me" – including the data about me generated as I use it, which gets stored back in my locker, so Facebook has to come back to my well and draw the water each time. That way, if I want to switch to Schmacebook or whatever, it's all still in my well, and I can immediately grant Schmacebook permission to look at it; I don't have to do some kind of data slurp and then re-upload it. It's a fully distributed way of thinking about data. And from an engineering standpoint, I'm curious whether that seems at all workable, given the size of Facebook and the number of spinning wheels it has –

Mark Zuckerberg: Yeah–

Jonathan Zittrain: – and I'm curious about your reaction to an idea like that.

Mark Zuckerberg: So, I think it's quite interesting. Certainly, the level of computation that Facebook is doing, and all the services we're building, already happens in a distributed way. I mean, as a basic model, I think we're going to build out data center capacity over the next five years, and our plan for what we think we need to do is on the order of everything that AWS and Google Cloud are doing to support all of their customers. So, okay, so it's a relatively computationally intensive thing.

Over time, you would assume you'll get more compute. So decentralized things, which are less computationally efficient, will be harder to do at first – they take more computation – but eventually maybe you'll have the computational resources for it. I think the more interesting questions there are not about feasibility in the near term, but the philosophical questions of the goodness of a system like that.

So, there's a question in this – and we can get into decentralization – one of the things that I've been thinking about a lot is a use of blockchain that I'm potentially interested in – although I haven't figured out a way to make this work out – is around authentication and granting – basically granting access to your information and to different services. So, basically, replacing the notion of what we have with Facebook Connect with something that's fully distributed.

Jonathan Zittrain: "Would you like to log in with your Facebook account?" being the status quo –

Mark Zuckerberg: Basically, you take your information, you store it in some decentralized system, and you have the choice of whether to log in to different places, and you're not going through an intermediary – which is sort of what you're suggesting here –

Jonathan Zittrain: Yes.

Mark Zuckerberg: – in a sense. Okay, now, there's a lot about that that I think is quite attractive. You know, for developers, one of the things that's really troubling about working with our system, or Google's system for that matter, or having to deliver services through Apple's App Store, is that you don't want an intermediary between you and the people you're serving – the people who use your service and you – right, where someone can say, "Hey, we as the developer have to follow your policies, and if we don't, then you can cut off our access to the people we're serving." That's a difficult and troubling position to be in. I think developers –

Jonathan Zittrain: – and you're referring to recent events there.

Mark Zuckerberg: No, well – I mean it generally.

Mark Zuckerberg: But I think it underscores – I think every developer probably feels this: whether people are using any of the app stores, but also logging in with Facebook, or with Google – with any of these services, you want a direct relationship with the people you serve.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Now, okay, but let's look at the flip side. So, what we saw with Cambridge Analytica over the last couple of years was basically an example where people chose to take data – some of it was their data, some of it was data they could see from their friends, right? Because if you want to make it so that an alternative service could build a competing news feed, then you'd need people to be able to bring the data that they can see within the system. Okay, so basically, people chose to give their data to a developer affiliated with Cambridge University, a very respected institution, and then the developer turned around and sold the data to the firm Cambridge Analytica, which was in violation of our policies. So, we cut off the developer's access. And, of course, in a fully distributed system, there'd be no one who could cut off the developer's access. So, the question is if you have a fully distributed system, it dramatically empowers individuals on the one hand, but it really raises the stakes and it gets to your questions around, well, what are the boundaries on consent and how people can really actually effectively know that they’re giving consent to an institution?

In some ways it’s a lot easier to regulate and hold accountable large companies like Facebook or Google, because they’re more visible, they’re more transparent than the long tail of services that people would chose to then go interact with directly. So, I think that this is a really interesting social question. To some degree I think this idea of going in the direction of blockchain authentication is less gated on the technology and capacity to do that. I think if you were doing fully decentralized Facebook, that would take massive computation, but I’m sure we could do fully decentralized authentication if we wanted to. I think the real question is do you really want that?

Jonathan Zittrain: Yes.

Mark Zuckerberg: Right? And I think you’d have more cases where, yes, people would be able to not have an intermediary, but you’d also have more cases of abuse, and the recourse would be much harder.

Jonathan Zittrain: Yes. What I hear you saying is that people, as they go about their business online, are generating data about themselves that’s quite valuable – if not to themselves, then to others who might interact with them. And the more they are empowered, possibly through a distributed system, to decide where that data goes and with whom they want to share it, the more they could be exposed to exploitation. This is a genuine dilemma–

Mark Zuckerberg: Yeah, yeah.

Jonathan Zittrain: –because I’m a huge fan of decentralization.

Mark Zuckerberg: Yeah, yeah.

Jonathan Zittrain: But I also see the problem. And maybe one answer is there’s some data that’s just so toxic there’s no vessel we should put it in; it might eat a hole through it or something, metaphorically speaking. But, then again, innocuous data can so quickly be assembled into something scary. So, I don’t know if the next election–

Mark Zuckerberg: Yeah. [ph?] I mean, I think in general we’re talking about large-scale data being assembled into something that means something different from what the individual data points mean.

Jonathan Zittrain: Yes.

Mark Zuckerberg: And I think that’s the whole challenge here. But I philosophically agree with you that– I mean, I do think about the work that we’re doing as a decentralizing force in the world, right? A lot of the reason why I think people of my generation got into technology is because we believe that technology gives individuals power and isn’t massively centralizing. Now, a bunch of big companies have gotten built in the process, but I think what has largely happened is that individuals today have more voice, more ability to affiliate with who they want and stay connected with people, more ability to form communities in ways that they couldn’t before, and I think that’s massively empowering to individuals – and that’s philosophically kind of the side that I tend to be on. So, that’s why I’m thinking about going back to decentralized or blockchain authentication. That’s why I’m kind of bouncing around how you could potentially make this work, because my orientation is to try to go in that direction.

Jonathan Zittrain: Yes.

Mark Zuckerberg: An example where I think we’re generally a lot closer to going in that direction is encryption. I mean, this is, like, one of the really big debates today: basically, what are the boundaries on where you would want a messaging service to be encrypted? And there are all these benefits from a privacy and security perspective, but, on the other hand, one of the big issues that we’re grappling with is content governance, and where the line is between free expression – and, I suppose, privacy – on one side, and safety on the other, as people do really bad things, right, some of the time. And I think people rightfully have an expectation of us that we’re going to do everything we can to stop terrorists from recruiting people, or people from exploiting children, or doing different things. And moving in the direction of making these systems more encrypted certainly reduces some of the signals we would have access to in order to do some of that really important work.

Mark Zuckerberg: But here we are, right? We’re sitting in this position where we’re running WhatsApp, which is the largest end-to-end encrypted service in the world, and we’re running Messenger, another one of the largest messaging systems in the world, where encryption is an option but isn’t the default. I don’t think that, long term, it really makes sense to be running different systems with very different policies on this. I think this is sort of a philosophical question where you want to figure out where you stand on it. And, so, my question for you – and I’ll talk about how I’m thinking about this – is: all right, if you were in my position, and you got to flip a switch (which is probably too glib, because there’s a lot of work that goes into this) and go in one direction for both of those services, how would you think about that?

Jonathan Zittrain: Well, the question you’re putting on the table, which is a hard one is “Is it okay,” and let’s just take the simple case, “for two people to communicate with each other in a way that makes it difficult for any third party to casually listen in?” Is that okay? And I think that the way we normally answer that question is kind of a form of what you might call status quo-ism, which is not satisfying. It’s whatever has been the case is—

Mark Zuckerberg: Yeah, yeah.

Jonathan Zittrain: –whatever has been the case is what should stay the case.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And, so, for WhatsApp, it’s like right now WhatsApp, as I understand it, you could correct me if I’m wrong, is pretty hard to get into if–

Mark Zuckerberg: It’s fully end-to-end encrypted.

Jonathan Zittrain: Right. So, if Facebook gets handed a subpoena or a warrant or something from name-your-favorite-country–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: –and you’re just like, “Thank you for playing. We have nothing to–”

Mark Zuckerberg: Oh, yeah, we’ve had employees thrown in jail because we have gotten court orders that we have to turn over data that we probably wouldn’t anyway, but we can’t because it’s encrypted.

Jonathan Zittrain: Yes. And then, on the other hand – and this is not as clean as it could be in theory – Messenger is sometimes encrypted, sometimes not. If it doesn’t happen to have been encrypted by the users, then that subpoena could work and, more than that, there could start to be automated systems – either on Facebook’s own initiative or under pressure from governments in the general case, not under a specific warrant – to say, “Hey, if the following phrases appear, if there’s some telltale that says, ‘This is somebody going after a kid for exploitation,’ it should be forwarded up.” If that’s already happening, and we can produce x-number of people who have been identified and a number of crimes averted that way, who wants to be the person to say, “Lock it down!”? Like, “We don’t want any more of that!” But I guess, to put myself now to your question: when I look out over years rather than just weeks or months, the ability to casually peek at any conversation going on between two people or among a small group of people – or even to have a machine do it for you, so you can just set your alert list, crudely speaking, and get stuff back – it’s always trite to call something Orwellian, but it makes Orwell look like a piker. I mean, it seems like a classic case where the next sentence would be, “What could possibly go wrong?”

Jonathan Zittrain: And we can fill that in! And it does mean, though, I think, that we have to confront the fact that if we choose to allow that kind of communication, then there are going to be crimes unsolved that could have been solved. There are going to be crimes not prevented that could have been prevented. And the only thing that blunts it a little is that it is not really all or nothing. The modern surveillance states of note in the world have a lot of arrows in their quivers. Just being able to darken your door and demand surveillance of a certain kind might be the first thing they would go to, but they’ve got a Plan B, and a Plan C, and a Plan D. And I guess it really gets to: what’s your threat model? If you think everybody is kind of a threat – think about the battles over copyright 15 years ago; everybody is a potential infringer, all they have to do is fire up Napster – then you’re wanting some massive technical infrastructure to prevent the bad thing. If what you’re thinking instead is that there are a few really bad apples, and they tend to – when they congregate online or otherwise with one another – tend to identify themselves, then we might just have to send somebody near their house to listen with a cup at the window, metaphorically speaking. That’s a different threat model and [sic] might not need it.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: Is that getting to an answer to your question?

Mark Zuckerberg: Yeah, and I think I generally agree. I mean, I’ve already said publicly that my inclination is to move these services in the direction of being all encrypted, at least the private communication version. I basically think, if you want to talk in metaphors, messaging is like people’s living room, right? And I think we– you know, we definitely don’t want a society where there’s a camera in everyone’s living room watching the content of those conversations.

Jonathan Zittrain: Even as we’re now– I mean, it is 2019; people are happily putting cameras in their living rooms.

Mark Zuckerberg: That’s their choice, but I guess they’re putting cameras in their living rooms, well, for a number of reasons, but–

Jonathan Zittrain: And Facebook has a camera that you can put into your living room–

Mark Zuckerberg: That is, I guess–

Jonathan Zittrain: I just want to be clear.

Mark Zuckerberg: Yeah, although that would be encrypted in this world.

Jonathan Zittrain: Encrypted between you and Facebook!

Mark Zuckerberg: No, no, no. I think– but it also–

Jonathan Zittrain: Doesn’t it have like a little Alexa functionality, too?

Mark Zuckerberg: Well, Portal works over Messenger. So, if we go towards encryption on Messenger, then that’ll be fully encrypted, which I think, frankly, is probably what people want.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: The other model, besides the living room, is the town square, and that, I think, just has different social norms and different policies and norms that should be at play around it. But I do think that these things are very different. Right? You’re not going to– you may end up in a world where the town square is a fully decentralized or fully encrypted thing, but it’s not clear what value there is in encrypting something that’s public content anyway, or very broad.

Jonathan Zittrain: But, now, you were put to it pretty hard in that as I understand it there’s now a change to how WhatsApp works, that there’s only five forwards permitted.

Mark Zuckerberg: Yeah, so, this is a really interesting point, right? So, when people talk about how encryption will darken some of the signals that we’ll be able to use, you know, both for potentially providing better services and for preventing harm. One of the– I guess, somewhat surprising to me, findings of the last couple of years of working on content governance and enforcement is that it often is much more effective to identify fake accounts and bad actors upstream of them doing something bad by patterns of activity rather than looking at the content.

Jonathan Zittrain: So-called meta data.

Mark Zuckerberg: Sure.

Jonathan Zittrain: “I don’t know what they’re saying, but here’s who they’re calling” kind of thing.

Mark Zuckerberg: Yeah, or just like they– this account doesn’t seem to really act like a person, right?

And I guess as AI gets more advanced and you build these adversarial networks, or generalized adversarial networks, you’ll get to a place where you have AI that can probably more effectively–

Jonathan Zittrain: Go under mimic [ph?] cover. Mimic– act like another person–

Mark Zuckerberg: –for a while.

Mark Zuckerberg: Yeah. But, at the same time, you’ll be building up contrary AI on the other side that is better at identifying AIs that are doing that. But this has certainly been the most effective tactic across a lot of the areas where we’ve needed to focus on preventing harm. You know, the ability to identify fake accounts – which, like, a huge amount of the– under any category of issue that you’re talking about, a lot of the issues downstream come from fake accounts or people who are clearly acting in some malicious or not-normal way. You can identify a lot of that without necessarily even looking at the content itself. And if you have to look at a piece of content, then in some cases you’re already late, because the content exists and the activity has already happened. So, that’s one of the things that makes me feel like encryption for these messaging services is really the right direction to go, because it’s a very pro-privacy and pro-security move to give people that control and assurance, and I’m relatively confident that, even though you are losing some tools on the finding-harmful-content side of the ledger, I don’t think at the end of the day that those are going to end up being the most important tools–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –for finding the most of the–

Jonathan Zittrain: But now connect it up quickly to the five forwards thing.

Mark Zuckerberg: Oh, yeah, sure. So, that gets down to: if you’re not operating on a piece of content directly, you need to operate on patterns of behavior in the network. And what we basically found was there weren’t that many good uses for people forwarding things more than five times, except to basically spam or blast stuff out. It was being disproportionately abused. So, you end up thinking about different tactics when you’re not operating on content specifically; you end up thinking more about patterns of usage.

Jonathan Zittrain: Well, spam, I get and that– I’m always in favor of things that reduce spam. However, you could also say the second category was just to spread content. You could have the classic, I don’t know, like Les Mis, or Paul Revere’s ride, or Arab Spring-esque in the romanticized vision of it: “Gosh, this is a way for people to do a tree,” and pass along a message that “you can’t stop the signal,” to use a Joss Whedon reference. You really want to get the word out. This would obviously stop that, too.

Mark Zuckerberg: Yeah, and then I think the question is you’re just weighing whether you want this private communication tool, where the vast majority of the use – and the reason why it was designed – was, the vast majority is just one-on-one. There’s a large amount of groups that people communicate in, but it’s a pretty small edge case of people operating this with, like– you have a lot of different groups and you’re trying to organize something and almost hack public-content-type or public-sharing-type utility into an encrypted space. And, again, there I think you start getting into “Is this the living room or is this the town square?” And when people start trying to use tools that are designed for one thing to get around what I think the social norms are for the town square, that’s when I think you probably start to have some issues. This is not– we’re not done addressing these issues. There’s a lot more to think through on this–

Jonathan Zittrain: Yeah.

Mark Zuckerberg: –but that’s the general shape of the problem that at least I perceive from the work that we’re doing.

Jonathan Zittrain: Well, without any particular segue, let’s talk about fake news.

Jonathan Zittrain: So, insert your favorite segue here. There’s some choice or at least some decision that gets made to figure out what’s going to be next in my newsfeed when I scroll up a little more.

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: And in the last conversation bit, we were talking about how much we’re looking at content versus telltales and metadata, things that surround the content.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: For knowing about what that next thing in the newsfeed should be, is it a valid desirable material consideration, do you think, for a platform like Facebook to say is the thing we are about to present true, whatever true means?

Mark Zuckerberg: Well, yes, because, again, getting at trying to serve people: people tell us that they don’t want fake content. Right? I mean, I don’t know anyone who wants fake content. I think the whole issue is, again, who gets to decide. Right? So, broadly speaking, I don’t know any individual who would sit there and say, “Yes, please show me things that you know are false and that are fake.” People want good-quality content and information. That said, I don’t really think that people want us to be deciding what is true for them, and people disagree on what is true. And, like, truth is– I mean, there are different levels of it: when someone is telling a story, maybe the meta arc is talking about something that is true, but the facts that were used in it are wrong in some nuanced way, yet, like, it speaks to some deeper experience. Well, was that true or not? And do people want that disqualified from being shown to them? I think different people are going to come to different places on this.

Now, so, I’ve been very sensitive on– like, we really want to make sure that we’re showing people high-quality content and information. We know that people don’t want false information. So we’re building quite advanced systems to be able to make sure that we’re emphasizing and showing stuff that is going to be high quality. But the big question is where you get the signal on what the quality is. So the initial v.1 of this was working with third-party fact checkers.

Right– I believe very strongly that people do not want Facebook to be, and that we should not be, the arbiters of truth in deciding what is correct for everyone in society. I think people already generally think that we have too much power in deciding what content is good. I tend to be concerned about that, too, and we should talk separately about some of the governance work we’re doing to try to bring more independent oversight into that.

Jonathan Zittrain: Yes.

Mark Zuckerberg: But let’s put that in a box for now and just say that with those concerns in mind, I’m definitely not looking to try to take on a lot more in terms of also deciding in addition to enforcing all the content policies, also deciding what is true for everyone in the world. Okay, so v.1 of that is we’re going to work with–

Jonathan Zittrain: Truth experts.

Mark Zuckerberg: We’re working with fact checkers.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: And they’re experts, and basically there’s, like, a whole field of how you go and assess certain content. They’re accredited. People can disagree with the leaning of some of these organizations.

Jonathan Zittrain: Who accredits the fact checkers?

Mark Zuckerberg: The Poynter Institute for Journalism.

Jonathan Zittrain: I should apply for my certification.

Mark Zuckerberg: You may.

Jonathan Zittrain: Okay, good.

Mark Zuckerberg: You’d probably get it, but you have to– You’d have to go through the process.

Mark Zuckerberg: The issue there is that there aren’t enough of them, right? There’s a lot of content – obviously, a lot of information is shared every day, and there just aren’t a lot of fact checkers. So then the question is, okay, that is probably–

Jonathan Zittrain: But the portion– You’re saying the food is good, it’s just the portions are small. But the food is good.

Mark Zuckerberg: I think in general, yes. But so you build systems, which is what we’ve done, especially leading up to elections, which I think are some of the most fraught times around this, when people really are aggressively trying to spread misinformation.

Jonathan Zittrain: Yes.

Mark Zuckerberg: You build systems that prioritize content that seems like it’s going viral, because you want to reduce the prevalence of how widespread the stuff gets, so that the fact checkers have tools to prioritize what they need to go look at. But it’s still getting to a relatively small percent of the content. So I think the real thing that we want to try to get to over time is more of a crowdsourced model, where it’s not that people are trusting some basic set of experts who are accredited but sitting in some kind of lofty institution somewhere else. It’s, like, do you trust– yeah, if you get enough data points from within the community of people reasonably looking at something and assessing it over time, then the question is whether you can compound that together into a strong enough signal that we can then use.

Jonathan Zittrain: Kind of old school, like a Slashdot moderation system–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: With only the worry that, if the stakes get high enough, somebody will want to astroturf it.

Mark Zuckerberg: Yes.

Jonathan Zittrain: I’d be–

Mark Zuckerberg: There are a lot of questions here, which is why I’m not sitting here and announcing a new program.

Mark Zuckerberg: But what I’m saying is this is, like,–

Jonathan Zittrain: Yeah,

Mark Zuckerberg: This is the general direction that I think we should be thinking about, and I think that there’s a lot of questions and–

Jonathan Zittrain: Yes.

Mark Zuckerberg: And we’d like to run some tests in this area to see whether this can help out. Which would be upholding the principles, which are that we want to stop–

Jonathan Zittrain: Yes.

Mark Zuckerberg: The spread of misinformation.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Knowing that no one wants misinformation. And the other principle, which is that we do not want to be arbiters of truth.

Jonathan Zittrain: Want to be the decider, yes.

Mark Zuckerberg: And I think that that’s the basic– those are the basic contours I think of that, of that problem.

Jonathan Zittrain: So let me run an idea by you that you can process in real time and tell me the eight reasons I have not thought of why this is a terrible idea. And that would be people see something in their Facebook feed. They’re about to share it out because it’s got a kind of outrage factor to it. I think of the classic story from two years ago in The Denver Guardian about “FBI agent suspected in Hillary Clinton email leak implicated in murder-suicide.” I have just uttered fake news.

None of that was true if you clicked through The Denver Guardian. There was just that article. There is no Denver Guardian. If you live in Denver, you cannot subscribe. Like, it is unambiguously fake. And it was shared more times than the most shared story during the election season of The Boston Globe. And so–

Mark Zuckerberg: So, and this is actually an example, by the way, of where trying to figure out fake accounts is a much simpler solution.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Than trying to down–

Jonathan Zittrain: So if a newspaper has one article–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: Wait for ten more before you decide they’re a newspaper.

Mark Zuckerberg: Yeah. Or, you know, I mean, there are any number of systems that you could build to basically detect, “Hey, this is–”

Jonathan Zittrain: A Potemkin.

Mark Zuckerberg: This is a fraudulent thing.

Jonathan Zittrain: Yes.

Mark Zuckerberg: And then you can take that down. And again, that ends up being a much less controversial decision because you’re doing it upstream on the basis of inauthenticity.

Jonathan Zittrain: Yes.

Mark Zuckerberg: In a system where people are supposed to be their real selves and represent that they’re their real selves, than downstream, trying to say, “Hey, is this true or false?”

Jonathan Zittrain: I made a mistake in giving you the easy case.

Mark Zuckerberg: Okay.

Jonathan Zittrain: So I should have not used that example.

Mark Zuckerberg: Too simple.

Jonathan Zittrain: You’re right and you knocked that one out of the park and, like, Denver Guardian, come up with more articles and be real and then come back and talk to us.

Jonathan Zittrain: So, here’s the harder case which is something that might be in an outlet that is, you know, viewed as legitimate, has a number of users, et cetera. So you can’t use the metadata as easily.

Imagine if somebody as they shared it out could say, “By the way, I want to follow this. I want to learn a little bit more about this.” They click a button that says that. And I also realized when I talked earlier to somebody at Facebook on this that adding a new button to the homepage is, like, everybody’s first idea.

Mark Zuckerberg: Oh, yeah.

Jonathan Zittrain: And it’s–

Mark Zuckerberg: But it’s a reasonable thought experiment, even though it would lead to a very bad UI.

Jonathan Zittrain: Fair enough. I understand this is already–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: In the land of fantasy. So they add the button. They say, “I want to follow up on this.”

If enough people are clicking comparatively on the same thing to say, “I want to learn more about this. If anything else develops, let me know, Facebook,” then, if I have my pneumatic tube, it goes to a virtually convened panel of three librarians. We go to the librarians of the nation and the world at public and private libraries across the land who agree to participate in this program. Maybe we set up a little foundation for it that’s endowed permanently and no longer connected to whoever endowed it. And those librarians together discuss the piece and they come back with what they would tell a patron if somebody came up to them and said, “I’m about to cite this in my social studies paper. What do you think?” And librarians, like, live for questions like that.

Mark Zuckerberg: Mm-hmm, yeah.

Jonathan Zittrain: They’re like, “Wow. Let us tell you.” And they have a huge fiduciary notion of patron duty that says, “I may disapprove of you even studying this, whatever, but I’m here to serve you, the user.”

Mark Zuckerberg: Yeah.

Jonathan Zittrain: “And I just think you should know, this is why maybe it’s not such a good source.” And when they come up with that they can send it back and it gets pushed out to everybody who asks for follow-up–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And they can do with it as they will. And last piece of the puzzle, we have high school students who apprentice as librarian number three for credit.

Jonathan Zittrain: And then they can get graded on how well they participated in this exercise which helps generate a new generation of librarian-themed people who are better off at reading things, so.

Mark Zuckerberg: All right, well, I think you have a side goal here which I haven’t been thinking about on the librarian thing.

Mark Zuckerberg: Which is the evil goal of promoting libraries.

Jonathan Zittrain: Well, it’s–

Mark Zuckerberg: No, but I mean, look, I think solving– preventing misinformation or spreading misinformation is hard enough without also trying to develop high school students in a direction.

Jonathan Zittrain: Ah. My colleague Charles Foote–

Mark Zuckerberg: So, that’s solving a problem with a problem.

Jonathan Zittrain: Okay. Well, anyway, yes.

Mark Zuckerberg: So I actually think I agree with most of what you have in there. It doesn’t need to be a button on the home page, it can be– I mean, it turns out that there’s so many people using these services that even if you get– even if you put something that looks like it’s not super prominent, like, behind the three dots on a given newsfeed story, you have the options there, yeah, you’re not– not everyone is going to– is going to like something.

Jonathan Zittrain: If 1 out of 1000 do it, you still get 10,000 or 100,000 people, yeah.
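Zittrain’s back-of-the-envelope point – that even a tiny opt-in rate yields a usable volume of reports at this scale – checks out. The audience sizes below are illustrative numbers only, not actual Facebook figures:

```python
# Illustrative arithmetic: a 1-in-1,000 "follow up" click rate still produces
# tens of thousands of signals at large audience sizes.

def expected_flags(viewers, flag_rate=1 / 1000):
    """Rough count of follow-up clicks for a given audience size."""
    return round(viewers * flag_rate)

print(expected_flags(10_000_000))   # 10000
print(expected_flags(100_000_000))  # 100000
```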

Mark Zuckerberg: You get pretty good signal. But I actually think you could do even better, which is, it’s not even clear that you need that signal. I think that that’s super helpful. I think really what matters is looking at stuff that’s getting a lot of distribution. So, you know, I think that there’s kind of this notion, and I’m going back to the encryption conversation, which is all right, if I say something that’s wrong to you in a one-on-one conversation, I mean, does that need to be fact checked? I mean, it’s, yeah, it would be good if you got the most accurate information.

Jonathan Zittrain: I do have a personal librarian to accompany me for most conversations, yes. There you go.

Mark Zuckerberg: Well, you are–

Jonathan Zittrain: Unusual.

Mark Zuckerberg: Yeah, yeah. Yes.

Mark Zuckerberg: That’s the word I was looking for.

Jonathan Zittrain: I’m not sure I believe you, but yes.

Mark Zuckerberg: It’s– But I think that there’s limited– I don’t think anyone would say that every message that goes back and forth in especially an encrypted messaging service should be

Jonathan Zittrain: Fact checked.

Mark Zuckerberg: Should be fact checked.

Jonathan Zittrain: Correct.

Mark Zuckerberg: So I think the real question is all right, when something starts going viral or getting a lot of distribution, that’s when it becomes most socially important for it to have some level of validation, or at least that we know that the community in general thinks that this is a reasonable thing. So, while it’s helpful to have the signal of whether people are flagging this as something that we should look at, I actually think increasingly you want to be designing systems that just prevent alarming or sensational content from going viral in the first place. And making sure that the stuff that is getting wide distribution is doing so because it’s high quality on whatever front you care about. So then, okay–
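The prioritization Zuckerberg describes – routing content to scarce fact checkers by how fast it is spreading, rather than by arrival order – amounts to a priority queue. A minimal sketch, where `shares_per_hour` is an assumed virality proxy, not an actual Facebook metric:

```python
import heapq

# Sketch: scarce fact checkers review the fastest-spreading items first.
def fact_check_order(items):
    """items: list of (story_id, shares_per_hour) pairs.
    Returns story ids, most viral first."""
    # heapq is a min-heap, so negate the rate to pop the largest first.
    heap = [(-rate, story) for story, rate in items]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

stories = [("local-news", 40), ("viral-hoax", 5000), ("meme", 800)]
print(fact_check_order(stories))  # ['viral-hoax', 'meme', 'local-news']
```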

Jonathan Zittrain: And that quality is still generally from Poynter or some external party that

Mark Zuckerberg: Well, well quality has many dimensions.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: But certainly accuracy is one dimension of it. You also, I mean, you pointed out I think in one of your questions, is this piece of content prone to incite outrage. If you don’t mind, I’ll get to your panel of three things in a second, but as a slight detour on this.

Jonathan Zittrain: Yes.

Mark Zuckerberg: One of the findings that has been quite interesting is, you know, there’s this question about whether social media in general basically makes it so that sensationalist content gets the most distribution. And what we’ve found is that, all right, so we’re going to have rules, right, about what content is allowed. And what we found is that generally, within whatever rules you set up, as content approaches the line of what is allowed, it often gets more distribution. So if you have some rule on, you know, what– and take a completely different example, our nudity policies, right. It’s like, okay, you have to define what is unacceptable nudity in some way. As you get as close to that as possible it’s like, all right. Like, this is maybe a photo of someone–

Jonathan Zittrain: The skin to share ratio goes up until it gets banned at which point it goes to zero.

Mark Zuckerberg: Yes. Okay. So that is a bad property of a system, right, that I think you want to generally address. You don’t want to design a community – or systems for helping to build a community – where the things that get closest to the line of what is bad get the most distribution.
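The “bad property” Zuckerberg describes – engagement climbing as content approaches the policy line, then dropping to zero once it crosses – and the borderline-content demotion he outlines later in the conversation can be sketched as two curves. This is purely illustrative; the linear shape and the 0.5 penalty threshold are assumptions:

```python
def natural_distribution(closeness):
    """Observed pattern: engagement rises as content nears the policy line.
    closeness: 0.0 (clearly fine) .. 1.0 (right at the line); anything
    above 1.0 violates policy and is removed outright."""
    if closeness > 1.0:
        return 0.0       # banned: distribution goes straight to zero
    return closeness     # otherwise engagement grows toward the line

def demoted_distribution(closeness, penalty_start=0.5):
    """Borderline-content fix: past an assumed threshold, demote progressively
    so distribution falls as content approaches the line instead of rising."""
    base = natural_distribution(closeness)
    if closeness <= penalty_start:
        return base
    # Linear demotion from full weight at the threshold down to zero at the line.
    demotion = (closeness - penalty_start) / (1.0 - penalty_start)
    return base * (1.0 - demotion)

# At the line itself, natural engagement peaks but the demoted system gives ~0.
print(natural_distribution(1.0), demoted_distribution(1.0))  # 1.0 0.0
```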

Jonathan Zittrain: So long as we have the premise, which in many cases is true, but I could probably try to think of some where it wouldn’t be true, that as you near the line, you are getting worse.

Mark Zuckerberg: That’s a good point. That’s a good point. There’s–

Jonathan Zittrain: You know, there might be humor that’s really edgy.

Mark Zuckerberg: That’s true.

Jonathan Zittrain: And that conveys a message that would be impossible to convey without the edginess, while not still–

Mark Zuckerberg: That is–

Jonathan Zittrain: But, I–

Mark Zuckerberg: That’s true.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: So but then you get the question of what’s the cost benefit of allowing that. And obviously, where you can accurately separate what’s good and bad – which, like in the case of misinformation, I’m not sure you could do fully accurately, but you can try to build systems that approximate it – there’s certainly the issue that there is misinformation which leads to massive public harm, right. So if it’s misinformation that is also spreading hate and leading to genocide or public attacks, it’s like, okay, we’re not going to allow that. Right. That’s coming down. But then generally if you say something that’s wrong, we’re not going to try to block that.

Jonathan Zittrain: Yes.

Mark Zuckerberg: We’re just going to try to not show it to people widely, because people don’t want content that is wrong. So then the question is, as something is approaching the line, how do you assess that? This is a general theme in a lot of the content governance and enforcement work that we’re doing, which is: there’s one piece of this which is just making sure that we can as effectively as possible enforce the policies that exist. Then there’s a whole other stream of work, which I call borderline content, which is basically this issue of, as content approaches the line of being against the policies, how do you make sure that that isn’t the content that is somehow getting the most distribution? And a lot of the things that we’ve done in the last year were focused on that problem, and it really improves the quality of the service, and people appreciate that.

Jonathan Zittrain: So this idea would be stuff that you’re kind of letting down easy without banning and letting down easy as it’s going to somehow have a coefficient of friction for sharing that goes up. It’s going to be harder–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: For it to go viral.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And–

Mark Zuckerberg: So it’s fascinating because it’s just against– Like, you can take almost any category of policy that we have, so I used nudity a second ago. You know, gore and violent imagery.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Hate speech.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: Any of these things. I mean, there’s, like, hate speech; there’s content that you would just say is mean or toxic but that did not violate– but that you would not want to have a society that banned being able to say that thing. But you don’t necessarily want that to be the content that is getting the most distribution.

Jonathan Zittrain: So here’s a classic transparency question around exactly that system you described.

And when you described this, I think you did a post around this a few months ago. This was fascinating.

You had graphs in the post depicting this, which was great. How would you feel about sharing back to the person who posted or possibly to everybody who encounters it its coefficient of friction? Would that freak people out? Would it be, like, all right, I– And in fact, they would then probably start conforming their posts, for better or worse,–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: To try to maximize the sharability. But that rating is already somewhere in there by design. Would it be okay to surface it?

Mark Zuckerberg: So, as a principle, I think that that would be good, but I don’t– The way that the systems are designed isn’t that you get a score of how inflammatory or sensationalist a piece of content is. The way that it basically works is you can build classifiers that identify specific types of things. Right.

So we’re going down the list of, like, all right, there’s 20 categories of harmful content that you’re trying to identify. You know, everything from terrorist propaganda on the one hand to self-harm issues to hate speech and election interference. And basically, each of these things, while it uses a lot of the same underlying machine learning infrastructure, you’re doing specific work for each of them. So if you go back to the example on nudity for a second, you’re not necessarily scoring everything on a scale of not at all nude to nude. You’re basically enforcing specific policies. So, you know, you’re saying, “Okay, if–”

Jonathan Zittrain: So by machine learning it would just be give me an estimate of the odds by which if a human looked at it who was employed to enforce policy–

Mark Zuckerberg: Well, basically–

Jonathan Zittrain: Whether it violates the policy.

Mark Zuckerberg: And you have a sense of, okay, this is– So what are the things that are adjacent to the policy, right? So you might say, okay, well, if the person is completely naked, that is something that you can definitely build a classifier to be able to identify with relatively high accuracy. But even if they’re not, you know, then the question is you kind of need to be able to qualitatively describe what are the things that are adjacent to that. So maybe the person is wearing a bathing suit and is in a sexually suggestive position. Right. It’s not like any piece of content you’re going to score from not at all nude to nude. But you kind of have the cases for what you think are adjacent to the issues and, again, you ground this in, qualitatively, people might click on it, they might engage with it, but at the end, they don’t necessarily feel good about it. And you want to get at, when you’re designing these systems, not just what people do, but also you want to make sure we factor in, too, like is this the content that people say that they really want to be seeing? Do they–?
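Zuckerberg’s point that enforcement runs separate per-policy classifiers over shared infrastructure, rather than one global offensiveness score, could be sketched like this. The categories, threshold, and keyword stand-ins are hypothetical; real systems use trained models, not string matching:

```python
from typing import Callable, Dict

# Each classifier maps a post to the estimated probability that a human
# reviewer would say it violates (or is adjacent to) that specific policy.
Classifier = Callable[[str], float]

def review_queue(post: str, classifiers: Dict[str, Classifier],
                 threshold: float = 0.8):
    """Return the policy categories under which this post should be reviewed."""
    return [name for name, clf in classifiers.items() if clf(post) >= threshold]

# Toy stand-ins for trained models (keyword checks, purely for illustration).
classifiers: Dict[str, Classifier] = {
    "hate_speech": lambda p: 0.9 if "slur" in p else 0.1,
    "nudity":      lambda p: 0.9 if "nude" in p else 0.1,
}
print(review_queue("a post containing a slur", classifiers))  # ['hate_speech']
```

The design point is that there is no single "fine-to-violating" axis: each policy gets its own notion of what is adjacent to its line, even though the models share infrastructure.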

Jonathan Zittrain: The constitutional law, there’s a formal kind of definition that’s emerged for the word “prurient.” If something appeals to the prurient interest–

Mark Zuckerberg: Okay.

Jonathan Zittrain: As part of a definition of obscenity, the famous Miller test, which was not a beer-oriented test. And part of a prurient interest is basically: it excites me and yet it completely disgusts me.

And it sounds like you’re actually converging to the Supreme Court’s vision of prurience with this.

Mark Zuckerberg: Maybe.

Jonathan Zittrain: And it might be– Don’t worry, I’m not trying to nail you down on that. But it’s very interesting that machine learning, which you invoked, is both really good, I gather, at something like this.

It’s the kind of thing that’s like: just have some people tell me with their expertise, does this come near to violating the policy or not, and I’ll just, through a Spidey sense, start to tell you whether it would.

Mark Zuckerberg: Mm-hmm.

Jonathan Zittrain: Rather than being able to throw out exactly what the factors are. I know the person’s fully clothed, but it still is going to invoke that quality. So all of the benefits of machine learning and all of, of course, all the drawbacks where it classifies something and somebody’s like, “Wait a minute. That was me doing a parody of blah, blah, blah.” That all comes to the fore.

Mark Zuckerberg: Yeah and I mean, when you ask people what they want to see in addition to looking at what they actually engage with, you do get a completely different sense of what people value and you can build systems that approximate that. But going back to your question, I think rather than giving people a score of the friction–

Jonathan Zittrain: Yes.

Mark Zuckerberg: I think you can probably give people feedback of, “Hey, this might make people uncomfortable in this way, in this specific way.” And this fits your–

Jonathan Zittrain: It might affect how much it gets– how much it gets shared.

Mark Zuckerberg: 是啊。 And this gets down to a different– There’s a different AI ethics question which I think is really important here, which is designing AI systems to be understandable by people

Jonathan Zittrain: Right.

Mark Zuckerberg: Right and to some degree, you don’t just want it to spit out a score of how offensive or, like, where it scores on any given policy. You want it to be able to map to specific things that might be problematic.

Jonathan Zittrain: Yes.

Mark Zuckerberg: And that’s the way that we’re trying to design the systems overall.

Jonathan Zittrain: Yes. Now we have something parked in the box we should take out, which is the external review stuff. But before we do, one other just transparency thing maybe to broach. It basically just occurred to me: I imagine it might be possible to issue me a score of how much I’ve earned for Facebook this year. It could simply say, “This is how much we collected on the basis of you in particular being exposed to an ad.” And I know sometimes people, I guess, might compete to get their numbers up. But I’m just curious, would that be a figure? I’d kind of be curious to know, in part because it might even lay the groundwork of being like, “Look, Mark, I’ll double it. You can have double the money and then don’t show me any ads.” Can we get a car off of that lot today?

Mark Zuckerberg: Okay, well, there’s a lot–

Mark Zuckerberg: There’s a lot in there.

Jonathan Zittrain: It was a quick question.

Mark Zuckerberg: So there’s a question in what you’re saying, which is: we built an ad-supported system. Should we have an option for people to pay to not see ads?

Jonathan Zittrain: Right.

Mark Zuckerberg: I think is kind of what you’re saying. I mean, just as the basic primer from first principles on this. You know, we’re building this service. We want to give everyone a voice. We want everyone to be able to connect with who they care about. If you’re trying to build a service for everyone,

Jonathan Zittrain: Got to be free. That’s just

Mark Zuckerberg: If you want them to use it, that’s just going to be the argument. Yes, yes.

Jonathan Zittrain: Okay. All right.

Mark Zuckerberg: So then, so this is a kind of a tried and true thing. There are a lot of companies over time that have been ad supported. In general what we find is that if people are going to see ads, they want them to be relevant. They don’t want them to be junk. Right. So then within that you give people control over how their data is used to show them ads. But the vast majority of people say, like, show me the most relevant ads that you can because I get that I have to see ads. This is a free service. So now the question is, all right, there’s a whole set of questions around that that we could get into, but– but then–

Jonathan Zittrain: For which we did talk about enough to reopen it, the personalization exploitation.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: Or even just philosophical question. Right now, Uber or Lyft are not funded that way.

We could apply this ad model to Uber or Lyft, “Free rides. Totally free. It’s just every fifth ride takes you to Wendy’s and idles outside the drive through window.”

Jonathan Zittrain: “Totally up to you what you want to do, but you’re going to sit here for a while,” and then you go on your way. I don’t know how– and status quo-ism would probably say people would have a problem with that, but it would give people rides that otherwise wouldn’t get rides.

Mark Zuckerberg: I have not thought about that case in their–

Mark Zuckerberg: In their business, so, so–

Jonathan Zittrain: Well, that’s my patent, damn it, so don’t you steal it.

Mark Zuckerberg: But certainly some services, I think, lend themselves better towards being ad supported than others.

Jonathan Zittrain: Okay.

Mark Zuckerberg: Okay and I think generally information-based ones tend to–

Jonathan Zittrain: Than my false imprisonment hypo, I’d– Okay, fair enough.

Mark Zuckerberg: I mean, that seems

Jonathan Zittrain: Yeah.

Mark Zuckerberg: There might be, you know, more– more issues there. But okay, but go to the subscription thing.

Jonathan Zittrain: Yes.

Mark Zuckerberg: When people have questions about the ad model on Facebook, I don’t think the questions are just about the ad model, I think they’re about both seeing ads and data use around ads.

And the thing that I think– so when I think about this, I don’t just think you want to let people pay to not see ads, because I actually think then the questions are around ads and data use, and I don’t think people are going to be that psyched about not seeing ads but then not having different controls over how their data is used. Okay, but now you start getting into a principle question, which is: are we going to let people pay to have different controls on data use than other people? And my answer to that is a hard no, right. So the prerequisite–

Jonathan Zittrain: What’s an example of data use that isn’t ad-based, just so we know what we’re talking about?

Mark Zuckerberg: That isn’t ad-based?

Jonathan Zittrain: Yeah.

Mark Zuckerberg: Like what do you mean?

Jonathan Zittrain: You were saying, I don’t want to see ads. But you’re saying that’s kind of just the wax on the car. What’s underneath is how the data gets used.

Mark Zuckerberg: So, well, look– Maybe– let me keep going with this explanation and then I think this’ll be clear.

Jonathan Zittrain: Yeah, sure.

Mark Zuckerberg: So one of the things that we’ve been working on is this tool that we call clear history. And the basic idea is you can kind of analogize it to a web browser where you can clear your cookies. That’s kind of a normal thing. You know that when you clear your cookies you’re going to get logged out of a bunch of stuff. A bunch of stuff might get more annoying.

Jonathan Zittrain: Which is why my guess is, am I right, probably nobody clears their cookies.

Mark Zuckerberg: I don’t know.

Jonathan Zittrain: They might use incognito mode or something, but.

Mark Zuckerberg: I think– I don’t know. How many of you guys clear your cookies every once in a while, right?

Jonathan Zittrain: This is not a representative group, damn it.

Mark Zuckerberg: Okay. Like, maybe once a year or something I’ll clear my cookies.


Mark Zuckerberg: But no, it’s, I think–

Jonathan Zittrain: Happy New Year.

Mark Zuckerberg: No, over some period of time, all right, but–

Jonathan Zittrain: Yeah, okay.

Mark Zuckerberg: But not necessarily every day. But it’s important that people have that tool even though it might in a local sense make their experience worse.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Okay. So that kind of content– what different services, websites and apps send to Facebook that, you know, we use to help measure the ads’ effectiveness there, right. So things like, you know, if you’re an app developer and you’re trying to pay for ads to help grow your app, we want to only charge you when something that we show actually leads to an install, not just whether someone sees the ad or clicks on it, but if they add–

Jonathan Zittrain: That requires a whole infrastructure to, yeah.

Mark Zuckerberg: Okay, so then, yeah, so you build that out. It helps us show people more relevant ads.

It can help show more relevant content. Often a lot of these signals are super useful also on the security side for some of the other things that we’ve talked about, so that ends up being important. But fundamentally, you know, looking at the model today, it seems like you should have something like this ability to clear history. It turns out that it’s a much more complex technical project. I’d talked about this at our developer conference last year, about how I’d hoped that we’d roll it out by the end of 2018 and just, the plumbing goes so deep into all the different systems that it’s, that– But we’re still working on it and we’re going to do it. It’s just it’s taking a little bit longer.

Jonathan Zittrain: So clear history basically means I am, as if a newb, I just show up–

Mark Zuckerberg: Yes.

Jonathan Zittrain: Even though I’ve been using Facebook for a while, it’s as if it knows nothing about me and it starts accreting again.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And I’m just trying to think just as a plain old citizen, how would I make an informed judgment about how often to do that or when I should do it? What–?

Mark Zuckerberg: Well, hold on. Let’s go to that in a second.

Jonathan Zittrain: Okay.

Mark Zuckerberg: But one thing, just to connect the dots on the last conversation.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: Clear history is a prerequisite, I think, for being able to do anything like subscriptions.

Right. Because, like, partially what someone would want to do, if they were going to really actually pay for a non-ad-supported version where their data wasn’t being used in a system like that, you would want to have a control so that Facebook didn’t have access or wasn’t using that data or associating it with your account. And as a principled matter, we are not going to just offer a control like that to people who pay.

Right. That’s going to, if we’re going to give controls over data use, we’re going to do that for everyone in the community. So that’s the first thing that I think we need to go do.

Mark Zuckerberg: So that’s kind of– This is sort of how we’re thinking about the project, and this is a really deep and big technical project, but we’re committed to doing it because I think that’s what it’s there for.

Jonathan Zittrain: And I guess like an ad block or somebody could then write a little script for your browser that would just clear your history every time you visit or something.

Mark Zuckerberg: Oh, yeah, no, but the plan would also be to offer something that’s an ongoing thing.

Jonathan Zittrain: I see.

Mark Zuckerberg: In your browser, but I think the analogy here is you kind of have, in your browser you have the ability to clear your cookies. And then, like, in some other place you have under your, like, nuclear settings, like, don’t ever accept any cookies in my browser. And it’s like, all right, your browser’s not really going to work that well.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: But you can do that if you want, because you should have that control. I think that these are part and parcel, right. I think a lot of people might go and clear their history on a periodic basis because they– Or, actually, in the research that we’ve done on this as we’ve been developing it, the real thing that people have told us that they want is similar to cookie management: not necessarily wiping everything, because that ends in the inconvenience of getting logged out of a bunch of things, but there are just certain services or apps that you don’t want that data to be connected to your Facebook account. So having the ability on an ad hoc basis to go through and say, “Hey, stop associating this thing,” is going to end up being a quite important thing that I think we want to try to deliver. So, as we’re getting into this, it’s a more complex thing, but I think it’s very valuable. And in any conversation around subscriptions, I think you would want to start by making sure that everyone has these kinds of controls. So we’re kind of in the early phases of doing that. The philosophical downstream question of whether you also let people pay to not have ads, I don’t know. There were a bunch of questions around whether that’s actually a good thing, and I personally don’t believe that very many people would like to pay to not have ads. It may still end up being the right thing to offer that as a choice down the line, but all of the data that I’ve seen suggests that the vast, vast, vast majority of people want a free service, and that the ads, in a lot of places, are not even that different from the organic content in terms of the quality of what people are able to see.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: People like being able to get information from local businesses and things like that too, so. So there’s a lot of good there.

Jonathan Zittrain: Yeah. Forty years ago it would have been the question of ABC versus HBO and the answer turned out to be yes.

Jonathan Zittrain: So you’re right. And people might have different things.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: There’s a little paradox lingering in there about if something’s so important and vital that we wouldn’t want to deprive anybody of access to it but therefore nobody gets it until we figured out how to remove it for everybody.

Mark Zuckerberg: What we–

Jonathan Zittrain: In other words, if I could buy my way out of ads and data collection it wouldn’t be fair to those who can’t and therefore we all subsist with it until the advances you’re talking about.

Mark Zuckerberg: Yeah, but I guess what I’m saying is on the data use, I don’t believe that that’s something that people should buy. I think the data principles that we have need to be uniformly available to everyone. That to me is a really important principle. It’s, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn’t feel like a moral question to me.

Jonathan Zittrain: Yes.

Mark Zuckerberg: But the question of whether you can pay to have different privacy controls feels wrong. So that to me is something that in any conversation about whether we’d evolve towards having a subscription service, I think you have to have these controls first and it’s a very deep thing. A technical problem to go do, but we’re– that’s why we’re working through that.

Jonathan Zittrain: Yes. So long as the privacy controls that we’re not able to buy our way into aren’t controls that people ought to have. You know, it’s just the kind of underlying question of: is the system as it is, that we can’t opt out of, a fair system? And that’s of course, you know, you have to go into the details to figure out what you mean by it. But let’s, in the remaining time we have left–

Mark Zuckerberg: How are we doing on time?

Jonathan Zittrain: We’re good. We’re 76 minutes in.

Mark Zuckerberg: All right, into–

Mark Zuckerberg: We’re going to get through maybe half the topics.

Jonathan Zittrain: Yeah, yeah, yeah.

Mark Zuckerberg: And I’ll come back and do another one later.

Jonathan Zittrain: I’m going to bring this in for a landing soon. What’s left on my agenda includes such things as taking out of the box the independent review stuff– chat a little bit about that. I’d be curious, and this might be a nice thing, really, as we wrap up, which would be a sense of any vision you have for what Facebook would look like in 10 or 15 years, and how different it would be than the Facebook of 10 years ago is compared to today. So that’s something I’d want to talk about. Is there anything big on your list that you want to make sure we talk about?

Mark Zuckerberg: Those are good. Those are good topics.

Jonathan Zittrain: Fair enough.


Jonathan Zittrain: So all right, the external review board.

Mark Zuckerberg: Yeah. So one of the big questions that I have just been thinking about is, you know, we make a lot of decisions around content enforcement and what stays up and what comes down. And having gone through this process over the last few years of working on the systems, one of the themes that I feel really strongly about is that we shouldn’t be making so many of these decisions ourselves. You know, one of the ways that I try to reason about this stuff is take myself out of the position of being CEO of the company, almost like a Rawlsian perspective. If I was a different person, what would I want the CEO of the company to be able to do? And I would not want so many decisions about content to be concentrated with any individual. So–

Jonathan Zittrain: It is weird to see big impactful, to use a terrible word, decisions about what a huge swath of humanity does or doesn’t see inevitably handled as, like, a customer service issue. It does feel like a mismatch, which is what I hear you saying.

Mark Zuckerberg: So let’s, yeah, so I actually think the customer service analogy is a really interesting one. Right. So when you email Amazon, because they don’t, they make a mistake with your package, that’s customer support. Right. I mean, they are trying to provide a service and generally, they can invest more in customer support and make people happier. We’re doing something completely different, right.

When someone emails us with an issue or flags some content, they’re basically complaining about something that someone else in the community did. So it’s more like– it’s almost more like a court system in that sense. Doing more of that does not make people happy because in every one of those transactions one person ends up the winner and one is the loser. Either you said that the content was fine, in which case the person complaining is upset, or you took the content down, in which case the person is really upset because you’re now telling them that they don’t have the ability to express something that they feel is a valid thing that they should be able to express.

So in some deep sense while some amount of what we do is customer support, people get locked out of their account, et cetera, you know, we now have, like, more than 30,000 people working on content review and safety review, doing the kind of judgments that, you know, it’s basically a lot of the stuff, we have machine learning systems that flag things that could be problematic in addition to people in the community flagging things, but making these assessments of whether the stuff is right or not. So one of the questions that I just think about, it’s like, okay, well, you have many people doing this.

Regardless of how much training they have, we’re going to make mistakes, right. So you want to start building in principles around, you know, what you would kind of think of as due process, right. So we’re building in an ability to have an appeal, right, which already is quite good in that we are able to overturn a bunch of mistakes that the first line people make in making these assessments. But at some level I think you also want a level of kind of independent appeal, right, where if, okay, let’s say, so the appeals go to maybe a higher level of Facebook employee who is a little more trained in the nuances of the policies; but at some point, I think you also need an appeal to an independent group, which is, like, is this policy fair? Was this–? Like is this piece of content really getting on the wrong side of the balance of free expression and safety? And I just don’t think at the end of the day that that’s something that you want centralized in a single company. So now the question is how do you design that system and that’s a real question, right, so that we don’t pretend to have the answers on this. What we’re basically working through is we have a draft proposal and we’re working with a lot of experts around the world to run a few pilots in the first half of this year that hopefully we can codify into something that’s a longer term thing. But I just, I believe that this is just an incredibly important thing. As a person and if I take aside the role that I have as CEO of the company, I do not want the company being able to make all of those final decisions without a check and balance and accountability, so I want to use the position that I’m in to help build that kind of an institution.

Jonathan Zittrain: Yes. And when we talk about an appeal, then, it sounds like you could appeal two distinct things. One is this was the rule but it was applied wrong to me. This, in fact, was parody [ph?] so it shouldn’t be seen as near the line.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And I want the independent body to look at that. The other would be the rule is wrong. The rule should change because–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And you’re thinking the independent body could weigh in on both of those?

Mark Zuckerberg: Yeah. Over time, I would like the role of the independent oversight board to be able to expand to do additional things as well. I think the question is it’s hard enough to even set something up that’s going to codify the values that we have around expression and safety on a relatively defined topic.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: So I think the question is if you kind of view this as an experiment in institution building where we’re trying to build this thing that is going to have real power to–

Jonathan Zittrain: Yes.

Mark Zuckerberg: I mean, like, I will not be able to make a decision that overturns what they say. Which I think is good. I think also just it raises the stakes. You need to make sure we get this right, so.

Jonathan Zittrain: It’s fascinating. It’s huge. I think the way you’re describing it, I wouldn’t want to understate–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: That this is not a usual way of doing business.

Mark Zuckerberg: Yeah, but I think it– I think this is– I really care about getting this right.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: But I think you want to start with something that’s relatively well-defined and then hopefully expand it to be able to cover more things over time. So in the beginning I think one question that could come up is my understanding– I mean, it’s always dangerous talking about legal precedent when I’m– this might be one of my first times at Harvard Law School. I did not spend a lot of time here–

Mark Zuckerberg: When I was an undergrad. But, you know what I mean, the– if the Supreme Court overturns something, they don’t tell Congress what the law should be, they just say there’s an issue here, right. And then basically there’s a process. Okay. So if I’m getting that wrong–

Mark Zuckerberg: Okay. I shouldn’t have done that.

Jonathan Zittrain: No, no. That’s quite honest. [ph?]

Mark Zuckerberg: I knew that was dangerous.

Mark Zuckerberg: And that that was a mistake.

Jonathan Zittrain: There are people who do agree with you.

Mark Zuckerberg: Okay. Oh, so that’s an open question that that’s how it works.

Jonathan Zittrain: It’s a highly debated question, yes.

Mark Zuckerberg: Okay.

Jonathan Zittrain: There’s the I’m just the umpire calling balls and strikes and in fact, the first type of question we brought up, which was, “Hey, we get this is the standard. Does it apply here?” lends itself a little more to, you know, you get three swings and if you miss them all, like, you can’t keep playing. The umpire can usher you away from the home plate. This is, I’m really digging deep into my knowledge now of baseball. There’s another thing about, like,–

Mark Zuckerberg: That’s okay. I’m not the person who’s going to call you out on getting something wrong there.

Jonathan Zittrain: I appreciate that.

Mark Zuckerberg: That’s why I also need to have a librarian next to me at all times.

Jonathan Zittrain: Very good. I wonder how much librarians tend to know about baseball.

Mark Zuckerberg: Aww.

Jonathan Zittrain: But we digress. Ah, we’re going to get letters, mentions.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: But whether or not the game is actually any good with a three strikes rule, maybe there should be two or four or whatever, starts to ask of the umpire more than just, you know, your best sense of how that play just went. Both may be something. Both are surely beyond standard customer service issues, so both could maybe be usefully externalized. What you’d ask the board to do in the category one kind of stuff maybe it’s true that, like, professional umpirage [ph?] could help us and there are people who are jurists who can do that worldwide. For the other, whether it’s the Supreme

Jonathan Zittrain: –court, or the so-called common law and state courts where often a state supreme court will be like, “Henceforth, 50 feet needs to be the height of a baseball net,” and like, “If you don’t agree, Legislature, we’ll hear from you, but until then it’s 50 feet.” They really do kind of get into the weeds. They derive maybe some legitimacy for decisions like that from being close to their communities, and it really regresses them to a question of: Is Facebook a global community, a community of 2.X billion people worldwide, transcending any national boundaries, and for which I think so far on these issues, it’s meant to be, “The rule is the rule,” it doesn’t really change in terms of service from one place to another– versus how much do we think of it as somehow localized– whether or not localized through government– but where different local communities make their own judgments?

Mark Zuckerberg: That is one of the big questions. I mean, right now we have community standards that are global. We follow local laws, as you say. But I think the idea is– I don’t think we want to end up in a place where we have very different norms in different places, but you want to have some sense of representation and making sure that the body that can deliberate on this has a good diversity of views. So these are a lot of the things that we’re trying to figure out, is like: Well, how big is the body? When decisions are made, are they made by the whole body, or do you have panels of people that are smaller sets? If there are panels, how do you make sure that you’re not just getting a random sample that kind of skews in the values perspective towards one thing? So then there are a bunch of mechanisms like, okay, maybe one panel that’s randomly constituted decides on whether the board will take up a question or one of the issues, but then a separate random panel of the group actually does the decisions, so that way you eliminate some risk that any given panel is going to be too ideologically skewed. So there’s a bunch of things that I think we need to think through and work through, but the goal on this is to, over time, have it grow into something that can provide greater accountability and oversight to potentially more of the hard questions that we face, but I think it’s so high-stakes that starting with something that’s relatively defined is going to be the right way to go in the beginning.
So regardless of the fact that I was unaware of the controversy around the legal point that I made a second ago, I do think in our case it makes sense to start with not having this group say what the policies are going to be, but just have there be– have it be able to say, “Hey, we think that you guys are on the wrong side on this, and maybe you should rethink where the policy is because we think you’re on the wrong side.” There’s one other thing that I think is worth calling out, which is in a typical kind of judicial analog, or at least here in the U.S., my understanding, is there’s the kind of appeal route to the independent board considering an issue, but I also think that we want to have an avenue where we as the company can also just raise hard issues that come up to the board without having– which I don’t actually know if there’s any mechanism for that.
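The two-random-panel mechanism Zuckerberg describes (one randomly drawn panel decides whether to take up a case, a disjoint second panel decides it) can be sketched in a few lines. The board size, panel size, and function name below are invented for illustration; nothing here reflects the actual oversight-board design.

```python
import random

def draw_panels(board_members, panel_size, seed=None):
    """Draw two disjoint random panels from an oversight board:
    an intake panel that decides whether to take up a question,
    and a separate decision panel that actually rules on it,
    reducing the chance that one skewed panel controls the outcome."""
    rng = random.Random(seed)
    members = list(board_members)
    rng.shuffle(members)
    intake_panel = members[:panel_size]
    decision_panel = members[panel_size:2 * panel_size]
    return intake_panel, decision_panel

# Hypothetical 40-member board, 5-person panels.
intake, decision = draw_panels(range(40), panel_size=5, seed=7)
assert len(intake) == 5 and len(decision) == 5
assert not set(intake) & set(decision)  # the two panels never overlap
```

Drawing both panels from one shuffle guarantees disjointness by construction, which is the property the mechanism relies on.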

Jonathan Zittrain: It’s called an advisory opinion.

Jonathan Zittrain: But under U.S. federal law, it’s not allowed because of Article III Case or Controversy requirement, but state courts do it all the time. You’ll have a federal court sometimes say– because it’s a federal court but it’s deciding something under state law. It’ll be like, “I don’t know, ask Florida.” And they’ll be like, “Hey Florida,” and then Florida is just Florida.

Mark Zuckerberg: Sure. So I think that–

Jonathan Zittrain: So you can do an advisory opinion.

Mark Zuckerberg: –that’ll end up being an important part of this too. We’re never going to be able to get out of the business of making frontline judgments. We’ll have the AI systems flag content that they think is against policies or could be, and then we’ll have people– this set of 30 thousand people, which is growing– that is trained to basically understand what the policies are. We have to make the frontline decisions, because a lot of this stuff needs to get handled in a timely way, and a more deliberative process that’s thinking about the fairness and the policies overall should happen over a different timeframe than what is often relevant, which is the enforcement of the initial policy. But I do think overall for a lot of the biggest questions, I just want to build a more independent process.

Jonathan Zittrain: Well, as you say, it’s an area with fractal complexity in the best of ways, and it really is terra incognita, and it’d be exciting to see how it might be built out. I imagine there’s a number of law professors around the world, including some who come from civil rather than common law jurisdictions, who are like, “This is how it works over here,” from which you could draw. Another lingering question would be– lawyers often have a bad reputation. I have no idea why. But they often are the glue for a system like this so that a judge does not have to be oracular or omniscient. There’s a process where the lawyer for one side does a ton of work and looks at prior decisions of this board and says, “Well, this is what would be consistent,” and the other lawyer comes back, and then the judge just gets to decide between the two, rather than having to just know everything. There’s a huge tradeoff here for every appealed content decision: how much do we want to build it into a case, and you need experts to help the parties, versus they each just sort of come before Solomon and say, “This kind of happened,” and– or Judge Judy maybe is a more contemporary reference.

Mark Zuckerberg: Somewhere between the two, yeah.

Jonathan Zittrain: Yeah. So it’s a lot of stuff– and for me, I both find myself– I don’t know if this is the definition of prurient– both excited by it and somewhat terrified by it, but very much saying that it’s better than a status quo, which is where I think you and I are completely agreeing, and maybe a model for other firms out there. So that’s the last question in this area that pops to my mind, which is: What part of what you’re developing at Facebook– a lot of which is really resource-intensive– is best thought of as a public good to be shared, including among basically competitors, versus, “That’s part of our comparative advantage and our secret sauce”? If you develop a particularly good algorithm that can really well detect fake news or spammers or bad actors– you’ve got the PhDs, you’ve got the processors– is that like, “In your face, Schmitter [ph?],” or is it like, “We should have somebody that– some body– that can help democratize that advance”? And the same could be said for these content decisions. How do you think about that?

Mark Zuckerberg: Yeah, so certainly the threat-sharing and security work that you just referenced is a good area where there’s much better collaboration now than there was historically. I think that that’s just because everyone recognizes that it’s such a more important issue. And by the way, there’s much better collaboration with governments now too on this, and not just our own here in the U.S., and law enforcement, but around the world with election commissions and law enforcement, because there’s just a broad awareness that these are issues and that–

Jonathan Zittrain: Especially if you have state actors in the mix as the adversary.

Mark Zuckerberg: Yes. So that’s certainly an area where there’s much better collaboration now, and that’s good. There’s still issues. For example, if you’re law enforcement or intelligence and you have developed a– “source” is not the right word– but basically if you’ve identified someone as a source of signals that you can watch and learn about, then you may not want to come to us and tell us, “Hey, we’ve identified that this state actor is doing this bad thing,” because then the natural thing that we’re going to want to do is make sure that they’re not on our system doing bad things, or that they’re not– either they’re not in the system at all or that we’re interfering with the bad things that they’re trying to do. So there’s some mismatch of incentives, but as you build up the relationships and trust, you can get to that kind of a relationship where they can also flag for you, “Hey, this is where we’re at.” So I just think having that kind of baseline where you build that up over time is helpful. And I think security and safety is probably the biggest area of that kind of collaboration now, across all the different types of threats; not just election and democratic process type stuff, but any kind of safety issue. The other area where I tend to think about what we’re doing is– it should be open– is just technical infrastructure overall. I mean, that is probably a less controversial piece, but we open-source a lot of the basic stuff that runs our systems, and I think that that is a– that’s a contribution that I’m quite proud of that we do.

We have sort of pioneered this way of thinking about how people connect, and the data model around that is more of a graph, and the idea of graph database and a lot of the infrastructure for being able to efficiently access that kind of content I think is broadly applicable beyond the context of a social network.

When I was here as an undergrad, even though I wasn’t here for very long, I studied psychology and computer science, and to me– I mean, my grounding philosophy on this stuff is that basically people should be at the center of more of the technology that we build. I mean, one of the early things that I kind of recognized when I was a student was like– at the time, there were internet sites for finding almost anything you cared about, whether it’s books or music or news or information or businesses– but as people, we think about the world primarily in terms of other people, not in terms of other objects, not cutting things up in terms of content or commerce or politics or different things, but it’s like– the stuff should be organized around the connections that people have, where people are at the centerpiece of that, and one of the missions that I care about is over time just pushing more technology development in the tech industry overall to develop things with that mindset. I think– and this is a little bit of a tangent– but the way that our phones work today, and all computing systems, organized around apps and tasks, is fundamentally not how people– how our brains work and how we approach the world. It’s not– so that’s one of the reasons why I’m just very excited longer-term about especially things like augmented reality, because it’ll give us a platform that I think actually is how we think about stuff. We’ll be able to bring the computational objects into the world but fundamentally we’ll be interacting as people around them. The whole thing won’t be organized around an app or a task; it’ll be organized around people, and that I think is a much more natural and human system for how our technology should be organized. So open-sourcing all of that infrastructure– to do that, and enabling not just us but other companies to kind of get that mindset into more of their thinking and the technical underpinning of that, is just something that I care really deeply about.
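The graph data model Zuckerberg refers to a couple of paragraphs up (people as nodes, relationships as edges, queries that walk connections rather than scan content) can be illustrated with a toy adjacency-list structure. The class name and queries below are invented for illustration and have nothing to do with Facebook's actual open-sourced infrastructure.

```python
from collections import defaultdict

class SocialGraph:
    """Toy adjacency-list sketch of a social graph: people are
    nodes, friendships are undirected edges, and queries traverse
    connections instead of scanning tables of content."""

    def __init__(self):
        self.edges = defaultdict(set)

    def connect(self, a, b):
        # Friendship is symmetric, so store the edge in both directions.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def friends_of_friends(self, person):
        # Walk two hops out, excluding direct friends and the person.
        direct = self.edges[person]
        return {f2 for f in direct for f2 in self.edges[f]} - direct - {person}

g = SocialGraph()
g.connect("alice", "bob")
g.connect("bob", "carol")
assert g.friends_of_friends("alice") == {"carol"}
```

The two-hop query is the kind of traversal that a graph layout makes cheap and a flat content index makes awkward, which is the point of organizing storage around connections.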

Jonathan Zittrain: Well, this is nice, and this is bringing us in for our landing, because we’re talking about 10, 20, 30 years ahead. As a term of art, I understand augmented reality to mean, “I’ve got a visor”– version 0.1 was Google Glass– something where I’m kind of out in the world but I’m literally online at the same time because there’s data coming at me in some– that’s what you’re talking about, correct?

Mark Zuckerberg: Yeah, although it really should be glasses like what you have. I think we’ll probably– maybe they’ll have to be a little bigger, but not too much bigger or else it would start to get weird.

Mark Zuckerberg: So I don’t think a visor is going to catch on. I don’t think anyone is psyched about that feature.

Jonathan Zittrain: And anything involving surgery starts to sound a little bad too.

Mark Zuckerberg: No, no, we’re definitely focused on–

Mark Zuckerberg: –on external things. Although–

Jonathan Zittrain: Like, “Don’t make news, don’t make news, don’t make news.”

Mark Zuckerberg: No, no, no. Although we have shown this demo of basically, can someone type by thinking, and of course when you’re talking about brain-computer interfaces, there’s two dimensions of that work. There’s the external stuff, and there’s the internal stuff, and invasive, and yes, of course if you’re actually trying to build things that everyone is going to use, you’re going to want to focus on the noninvasive things.

Jonathan Zittrain: Yes. Can you type by thinking?

Mark Zuckerberg: You can.

Jonathan Zittrain: It’s called a Ouija Board. No. But you’re subvocalizing enough or there’s enough of a read of–

Mark Zuckerberg: No, no, no. So there’s actually a bunch of the research here– there’s a question of throughput and how quickly can you type and how many bits can you express efficiently, but the basic foundation for the research is someone– a bunch of folks who are doing this research showed a bunch of people images– I think it was animals– so, “Here’s an elephant, here’s a giraffe”– while having kind of a net on their head, noninvasive, but shining light and therefore looking at the level of blood activity and– just blood flow and activity in the brain– trained a machine learning model basically on what the pattern of that imagery looked like when the person was looking at different animals, then told the person to think about an animal, right? So think about– just pick one of the animals to think about, and can predict what the person was thinking about in broad strokes just based on matching the neural activity. So the question is, so you can use that to type.
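The pattern-matching idea Zuckerberg describes (record activity patterns while people view labeled images, then match new activity to the closest learned pattern) can be caricatured as a nearest-centroid classifier. The channel counts, activity values, and function names below are entirely invented for illustration; real brain-decoding research uses far richer models and data.

```python
import math

def train_centroids(patterns, labels):
    """Average the recorded activity patterns per label (e.g. per animal)."""
    centroids = {}
    for label in set(labels):
        rows = [p for p, l in zip(patterns, labels) if l == label]
        centroids[label] = [sum(vals) / len(rows) for vals in zip(*rows)]
    return centroids

def predict(centroids, pattern):
    """Guess the label whose average pattern is closest to the new reading."""
    def dist(centroid):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(centroid, pattern)))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy "recordings": 3-channel activity patterns (made-up numbers).
patterns = [[0.1, 0.2, 0.1], [0.0, 0.3, 0.2],   # shown an elephant
            [0.9, 0.8, 1.0], [1.0, 0.7, 0.9]]   # shown a giraffe
labels = ["elephant", "elephant", "giraffe", "giraffe"]
centroids = train_centroids(patterns, labels)
assert predict(centroids, [0.95, 0.75, 0.95]) == "giraffe"
```

The "broad strokes" caveat in the transcript maps directly onto this setup: with only a few coarse channels, you can separate a handful of well-spaced categories, not arbitrary thoughts.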

Jonathan Zittrain: Fifth amendment implications are staggering.

Jonathan Zittrain: Sorry.

Mark Zuckerberg: Well, yes. I mean, presumably this would be something that someone would choose to use as a product. I’m not– yeah, yeah. I mean, yes, there’s of course all the other implications, but yeah, I think that this is going to be– that’s going to be an interesting thing down the line.

Jonathan Zittrain: But basically your vision then for a future–

Mark Zuckerberg: I don’t know how we got onto that.

Jonathan Zittrain: You can’t blame me. I think you brought this up.

Mark Zuckerberg: I did, but of all the things that– I mean, this is exciting, but we haven’t even covered yet how we should talk about– tech regulation and all this stuff I figured we’d get into. I mean, we’ll be here for like six or seven hours. I don’t know how many days you want to spend here talking about this, but–

Jonathan Zittrain: “We’re here at the Zuckerberg Center and hostage crisis.”

Jonathan Zittrain: “The building is surrounded.”

Mark Zuckerberg: Yeah. But I think a little bit on future tech and research is interesting too, so.

Jonathan Zittrain: Please.

Mark Zuckerberg: Yeah, we’re good.

Jonathan Zittrain: Oh, we did cover it, is what you’re saying.

Mark Zuckerberg: I mean, but going back to your question about what– if this is the last topic– what I’m excited about for the next 10 or 20 years– I do think over the long term, reshaping our computing platforms to be fundamentally more about people and how we process the world is a really fundamental thing. Over the nearer term– so call it five years– I think the clear trend is towards more private communication. If you look at all of the different ways that people want to share and communicate across the internet– but we have a good sense of the cross-strength, everything from one-on-one messages to kind of broadcasting publicly– the thing that is growing the fastest is private communication. Right?

So between WhatsApp and Messenger, and Instagram now, just the number of private messages– it’s about 100 billion a day through those systems alone, growing very quickly, growing much faster than the amount that people want to share or broadcast into a feed-type system. Of the type of broadcast content that people are doing, the thing that is growing by far the fastest is stories. Right?

So ephemeral sharing of, “I’m going to put this out, but I want to have a timeframe after which the data goes away.” So I think that that just gives you a sense of where the hub of social activity is going. It also is how we think about the strategy of the company. I mean, people– when we talk about privacy, I think a lot of the questions are often about privacy policies and legal or policy-type things, and privacy as a thing not to be breached, and making sure that you’re within the balance of what is good. But I actually think that there’s a much more– there’s another element of this that’s really fundamental, which is that people want tools that give them new contexts to communicate, and that’s also fundamentally about giving people power through privacy, not just not violating privacy, right? So not violating privacy is a backstop, but actually– you can kind of think about all the success that Facebook has had– this is kind of a counterintuitive thing– has been because we’ve given people new private or semi-private ways to communicate things that they wouldn’t have had before.

So thinking about Facebook as an innovator in privacy is certainly not the mainstream view, but going back to the very first thing that we did, making it so Harvard students could communicate in a way that they had some confidence that their content and information would be shared with only people within that community, there was no way that people had to communicate stuff at that scale, but not have it either be completely public or with just a small set of people before. And people’s desire to be understood and express themselves and be able to communicate with all different kinds of groups is, in the experience that I’ve had, nearly unbounded, and if you can give people new ways to be able to communicate safely and express themselves, then that is something that people just have a deep thirst and desire for.

So encryption is really important, because I mean, we take for granted in the U.S. that there’s good rule of law, and that the government isn’t too much in our business, but in a lot of places around the world, especially where WhatsApp is the biggest, people can’t take that for granted. So having it so that you really have confidence that you’re sharing something one-on-one and it’s not– and it really is one-on-one, it’s not one-on-one and the government there– actually makes it so people can share things that they wouldn’t be comfortable otherwise doing it. That’s power that you’re giving people through building privacy innovations.

Stories I just think is another example of this, where there are a lot of things that people don’t want as part of the permanent record but want to express, and it’s not an accident that that is becoming the primary way that people want to share with all of their friends, not putting something in a feed that goes on their permanent record. There will always be a use for that too– people want to have a record and there’s a lot of value that you can build around that– you can have longer-term discussions– it’s harder to do that around stories. There’s different value for these things. But over the next five years, I think we’re going to see all of social networking kind of be reconstituted around this base of private communication, and that’s something that I’m just very excited about. I think that that’s– it’s going to unlock a lot of people’s ability to express themselves and communicate things that they haven’t had the tools to do before, and it’s going to be the foundation for building a lot of really important tools on top of that too.

Jonathan Zittrain: That’s so interesting to me. I would not have predicted that direction for the next five years. I would have figured, “Gosh, if you already know with whom you want to speak, there are so many tools to speak with them,” some of which are end-to-end, some of which aren’t, some of which are roll-your-own and open-source, and there’s always a way to try to make that easier and better, but that feels a little bit to me like a kind of crowded space, not yet knowing of the innovations that might lie ahead in means of communicating with the people you already know you want to talk to. And for that, as you say, if that’s where it’s at, you’re right that encryption is going to be a big question, and otherwise technical design so that if the law comes knocking on the door, what would the company be in a position to say.

This is the Apple iPhone Cupertino– sorry, San Bernardino case– and it also calls to mind will there be peer-to-peer implementations of the things you’re thinking about that might not even need the server at all, and it’s basically just an app that people use, and if it’s going to deliver an ad, it can still do that appside, and how much governments will abide it. They have not, for the most part, demanded technology mandates to reshape how the technology works. They’re just saying, “If you’ve got it”– in part you’ve got it because you want to serve ads– “we want it.” But if you don’t even have it, it’s been rare for the governments to say, “Well, you’ve got to build your system to do it.” It did happen with the telephone system back in the day. CALEA, the Communications Assistance to Law Enforcement Act, did have federal law in the United States saying, “If you’re in the business of building a phone network, AT&T, you’ve got to make it so we can plug in as you go digital,” and we haven’t yet seen those mandates in the internet software side so much. So we can see that coming up again. But it’s so funny, because if you’d asked me, I would have figured it’s encountering people you haven’t met before and interacting with them, for which all of the stuff about air traffic control of what goes into your feed and how much your stuff gets shared– all of those issues start to rise to the fore, and it gets me thinking about, “I ought to be able to make a feed recipe that’s my recipe, and fills it according to Facebook variables, but I get to say what the variables are.” But I could see that if you’re just thinking about people communicating with the people they already know and like, that is a very different realm.

Mark Zuckerberg: It’s not necessarily– it’s not just the people that you already know. I do think– we’ve really focused on friends and family for the last 10 or 15 years, and I think a big part of what we’re going to focus on now is around building communities in different ways and all the utility that you can build on top of, once you have a network like this in place. So everything from how people can do commerce better to things like dating, which is– a lot of dating happens on our services, but we haven’t built any tools specifically for that.

Jonathan Zittrain: I do remember the Facebook joint experiment– “experiment” is such a terrible word– study, by which one could predict when two Facebook members are going to declare themselves in a relationship, months ahead of the actual declaration. I was thinking some of the ancillary products were in-laws.

Mark Zuckerberg: That was very early. Yeah. So you’re right that a lot of this is going to be about utility that you can build on top of it, but a lot of these things are fundamentally private, right? So if you’re thinking about commerce, people have a higher expectation of privacy, and the question is: Is the right context for that going to be around an app like Facebook, which is broad, or an app like Instagram?

I think part of it is– the discovery part of it, I think we’ll be very well served there– but then we’ll also transition to something that people want to be more private and secure. Anyhow, we could probably go on for many hours on this, but maybe we should save this for the Round 2 of this that we’ll do in the future.

Jonathan Zittrain: Indeed. So thanks so much for coming out, for talking at such length, for covering such a kaleidoscopic range of topics, and we look forward to the next time we see you.

Mark Zuckerberg: Yeah. Thank you.

Jonathan Zittrain: Thank you.


“What I’m saying is, on the data use, I don’t think that’s something people should be able to buy. I think the data principles we have need to be uniformly available to everyone. That to me is a really important principle,” Zuckerberg expanded. “It’s, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn’t feel like a moral question to me. But the question of whether you can pay to have different privacy controls feels wrong.”

Back in May, Zuckerberg announced that Facebook would build a Clear History button that deletes all the web-browsing data the social network has collected about you, but that data’s deep integration with the company’s systems has delayed the launch. Research showed that users didn’t want the inconvenience of being logged out of all their Facebook Connected services, but that they did want to hide certain data from the company.

“I think that Clear History is a prerequisite, I think, for being able to do anything like subscriptions. Because, like, partially what someone would want to do if they were going to really actually pay for a not-ad-supported version where their data wasn’t being used in a system like that, you would want to have a control so that Facebook didn’t have access to that data or wasn’t using it or associating it with your account. And as a principled matter, we are not going to just offer a control like that only to people who pay.”

Of all the apologies, promises and predictions Zuckerberg has made lately, this pledge may instill the most confidence. Some may see Zuckerberg as a data tyrant absorbing and exploiting as much of our personal information as he can, but at least there are lines he is unwilling to cross. Facebook could try to charge you for privacy, but it won’t. Given Facebook’s dominance in social networking and messaging, and Zuckerberg’s voting control of the company, a greedier man could have made the internet much worse.

Transcript – Mark Zuckerberg at Harvard University / First 2019 Personal Challenge Talk

Jonathan Zittrain: Very well. So, thank you, Mark, for coming to talk with me and our students from the Techtopia program and my “Internet and Society” course at Harvard Law School. We’re grateful to have the chance to talk about any number of issues, and we should just dive right in. So: privacy, autonomy and information fiduciaries.

Mark Zuckerberg: All right!

Jonathan Zittrain: Happy to talk about this.

Mark Zuckerberg: Yeah! I read your piece in The New York Times.

Jonathan Zittrain: The one with a headline that said, “Mark Zuckerberg can fix this mess”?

Mark Zuckerberg: Yeah.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: Although that was last year.

Jonathan Zittrain: It was! Are you suggesting it’s all fixed?

Mark Zuckerberg: No, no.

Jonathan Zittrain: Okay. So–

Jonathan Zittrain: I suppose I’m curious whether you still think we can fix this mess?

Jonathan Zittrain: Ah!

Jonathan Zittrain: I hope–

Jonathan Zittrain: “Hope springs eternal”–

Mark Zuckerberg: Yes, there you go.

Jonathan Zittrain: –is my motto. So, okay, let me quickly lay out the idea, whose coinage and scaffolding come from my colleague at Yale, Jack Balkin. The two of us have been developing it further. There’s a standard bundle of privacy problems you might be familiar with, having to do with people conveying information they know they’re conveying, or information they’re not so sure about– what we used to call “mouse droppings” as they scurry around the rafters of the internet and leave traces behind. And the standard way of talking about it then is that you want to make sure that stuff doesn’t go where you don’t want it to go. We call that “informational privacy.” We don’t want people to know things we’d prefer, say, only our friends to know. On a place like Facebook, you’re supposed to be able to adjust your settings and say, “Give them this, not that.” But there are also ways in which the stuff we share can still be used against us, and it feels like “Well, you consented” might not end the discussion. The analogy my colleague Jack brings in is a doctor and a patient, or a lawyer and a client– or, sometimes in America, but not always, a financial advisor and a client– which says these professionals have certain expertise, they get entrusted with all sorts of sensitive information from their clients and patients, and so they have a duty to act in those clients’ interests even when their own interests conflict. And maybe, just as a quick way in: I wrote a piece in 2014, which maybe you read, with a hypothetical about elections. It said: suppose Facebook had a view about which candidate should win, and it reminded the people likely to vote for the favored candidate that it was Election Day, while everyone else just got sent a cat photo. Would that be wrong? And I found that– I don’t know whether it would be illegal; it seems wrong to me, and maybe the fiduciary approach captures why it’s wrong.

Mark Zuckerberg: Okay. So, I think we could probably spend the whole time just on that!

Mark Zuckerberg: So, I read your op-ed, and I also read Balkin’s blog post on information fiduciaries. And I’ve talked to him about it as well.

Jonathan Zittrain: Great.

Mark Zuckerberg: And– at first blush, reading through this stuff, my reaction was that a lot of it makes sense. Right? The idea that we have a fiduciary relationship with the people who use our services is kind of intuitively– it’s how we think about how we’re building what we’re building. So, reading through it, it’s like, all right, you know, a lot of people seem to have the misconception that when we put together News Feed and do ranking, we have a team of people focused on maximizing the time people spend, but that’s not the goal we give them. We tell the teams to produce the service that we think is going to be the highest quality– we try to bring people in and have them tell us, of the content we could potentially show them, what– they tell us what they want to see, and then we build models that can predict that and build that service.

Jonathan Zittrain: Was that always the case, by the way– or–

Mark Zuckerberg: No.

Jonathan Zittrain: –is that somewhere you got to through some course correction?

Mark Zuckerberg: Through course correction. I mean, you start off using simpler signals, like what people are clicking on in feed, but then you pretty quickly learn, “Hey, that gets you to a local optimum,” right? If you’re focused on what people click on and predicting what people will click on, then you select for clickbait. Right? So, pretty quickly you realize from real feedback, from real people, that that’s not actually what people want. You’re not going to build the best service by doing that. So, you bring people in and you actually have these panels– we call it “getting to ground truth”– where you show people all the candidates for what could be shown to them, and you have people say, “What’s the most meaningful thing I wish this system were showing me?” So, all of this is to say that our own self-image, of ourselves and what we’re doing, is that we’re acting as fiduciaries and trying to build the best services for people. Where I think this ends up getting interesting is the question of who gets to decide, in a legal sense or a policy sense, what’s in people’s best interests? Right? So, we come in every day and think, “Hey, we’re building a service where we’re ranking News Feed to try to show people the most relevant content,” with the assumption, which the data backs up, that in general people want us to show them the most relevant content. But at some level you could ask the question, “Who gets to decide that ranking News Feed, or showing relevant ads”– or anything else we choose to work on– “is actually in people’s interests?” We’re doing our best to try to build the services [ph?] that we think are best. At the end of the day, a lot of it rests on “people choose to use it.” Right? Because, clearly, they’re getting some value from it. But all these questions, like you say, are around where people can effectively give consent and where not.
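The course correction Zuckerberg describes– click prediction alone selecting for clickbait until a survey-based “ground truth” quality signal is blended in– can be sketched roughly as follows. The weights, scores and field names are invented for illustration and are not Facebook’s actual ranking model:

```python
# Toy sketch of the ranking course correction: scoring feed candidates by
# predicted clicks alone favors clickbait, so a survey-derived quality
# signal is blended in. All numbers and field names are hypothetical.

def rank(candidates, click_weight=1.0, quality_weight=0.0):
    """Order candidates by a weighted blend of click prediction and
    survey-derived quality, highest score first."""
    def score(item):
        return (click_weight * item["p_click"]
                + quality_weight * item["survey_quality"])
    return sorted(candidates, key=score, reverse=True)

candidates = [
    # High click-through, but panels rate it as low-value clickbait.
    {"id": "clickbait", "p_click": 0.9, "survey_quality": 0.1},
    # Fewer clicks, but people say it is what they actually want to see.
    {"id": "friend_update", "p_click": 0.4, "survey_quality": 0.9},
]

clicks_only = rank(candidates)                  # naive v1 signal
blended = rank(candidates, quality_weight=2.0)  # after course correction

print([c["id"] for c in clicks_only])   # clickbait ranks first
print([c["id"] for c in blended])       # friend_update ranks first
```

The point of the sketch is only that the same candidates reorder once the survey signal carries weight; choosing that weight is exactly the “who decides?” question the conversation turns to next.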

Jonathan Zittrain: Yes.

Mark Zuckerberg: So, I think there are a lot of interesting questions here to unpack about how you’d implement a model like this. But at a high level I think– in terms of running this big company, I think it matters that people in a society trust the institutions of that society. Clearly, I think we’re at a point now where people have a lot of questions about big internet companies, Facebook in particular, and I do think that getting the right regulation and rules in place just provides a kind of societal-guardrail framework, where people can have confidence that, okay, these companies are operating within a framework that we’ve all agreed on. That’s better than them just doing whatever they want. And I think that would give people confidence. So, figuring out what that framework is, I think, is a really important thing. And I’m sure we’ll talk about it as it relates to–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –a lot of the content areas today. But getting to your question of– “Who decides what’s in people’s best interests, if not the people themselves?”–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –that’s a really interesting question.

Jonathan Zittrain: Yes, so, we should definitely talk about that. So, on our agenda is the “Who decides?” question.

Mark Zuckerberg: Okay.

Jonathan Zittrain: Other agenda items include– as you say, the fiduciary framework sounds good to you– doctor and patient; Facebook and user. And I hear you saying that’s where you’d want to end up anyway. There are some interesting questions about what people want versus what they think they want.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: People will say, “On January 1st, what I want”– the New Year’s resolution– “is a gym membership.” And then on January 2nd they don’t want to go to the gym. They want to want to go to the gym, but they never quite get there. And then, of course, there’s the pay-a-year-in-advance business model that counts on you never showing up. I guess one particular area it might be worth digging into is advertising, and maybe the dichotomy between personalization and– is it exploitation? Now, there may be stuff– I know Facebook, for example, bans payday loans as best it can.

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: That’s just a substantive area where it’s like, “All right, we don’t want to do that.”

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: But what about when we think about good personalization, so that Facebook knows I have a dog rather than a cat, and a targeter can then offer me dog food instead of cat food? What about a day– if it isn’t already here– when the ad platform could offer targeters a category like “I just lost my pet and I’m really depressed, and I’m ready to make some decisions I might regret later– but at the time I make them–”

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: “–I’m going to make them.” So, it’s the perfect time to tee up–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: –cubic zirconium or whatever.

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: It seems to me a fiduciary approach would say, ideally– how we get there I don’t know, but ideally we wouldn’t permit that kind of approach, using what we’ve gleaned from people to know they’re in a tough spot–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: –and then exploit them. But I don’t know. I don’t know how you would think about something like that. Could you write an algorithm to detect something like that?

Mark Zuckerberg: Well, I think one of the key principles is that we’re trying to run this company for the long term. And I think people assume a lot of things that– if you were just trying to optimize profits for the next quarter or something like that, you might want to do things that people might go along with in the short term but resent in the long run. But if you actually care about building community, and achieving this mission, and building the company for the long term, I think you’re just much more aligned with people than they usually assume a company is. And it goes back to this earlier idea, where I think our self-image is largely, as you say, being in this kind of fiduciary relationship– and we could go through a lot of different examples. I mean, we don’t want to show people content that they’re going to click on and engage with but then feel like they wasted their time afterward. We don’t want to show them things that they’re going to make a decision on the basis of and then regret. I mean, there’s a hard balance here– I mean, if you’re talking about what people want versus what they want– you know, often people’s revealed preferences, what they actually do, demonstrate a deeper sense of what they want than what they think they want. So, I think there’s a question of when something is exploitative versus when something is real but isn’t what you would say you want.

Jonathan Zittrain: Yes.

Mark Zuckerberg: And that’s a very hard thing to grapple with.

Jonathan Zittrain: Yes.

Mark Zuckerberg: But in a lot of these cases, my experience of running the company is that you start off building a system, you have relatively simple signals to start, and over time you build increasingly complex models that try to take into account more of what people care about. We could go through all these examples. I think News Feed and ads are probably the two most complex ranking examples–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –that we have. But like we were talking about a second ago– when we got started with these systems– I mean, just starting with News Feed, but you could do this on ads too– you know, the most naive signals, right, are what people click on or “Like.” But you quickly realize that that approximates something, but it’s a crude approximation of the ground truth of what people actually care about. So, what you really want to get to is, as much as possible, having real people look at the real candidate content and tell you, in a multi-dimensional way, what matters to them, and then trying to build systems that model that. And then you want to be conservative about preventing the downside. So, your payday-loan example– when we’ve talked about this in the past, you’ve put to me the question of “How do you know when a payday loan is going to be exploitative?” Right? “If you’re targeting people who are in a bad spot?” And our answer was, “Well, we don’t really know when it’s going to be exploitative, but we think the whole category potentially carries a massive risk of that, so we just ban it–”

Jonathan Zittrain: Right. Which makes it an easy case.

Mark Zuckerberg: Yes. And I think the harder cases are when there’s significant upside and significant downside and you want to weigh them against each other. So, for example, once we started making a really big effort on preventing election interference, one of the ideas that came up early on was, “Why don’t we just ban all ads that relate to anything political?” And then you quickly get into, all right, what’s a political ad? The classic legal definition is things that are about elections and candidates, but that’s actually not primarily what Russia and other actors were doing. Right? A lot of the issues we see are around issue ads– basically sowing division on social issues. So, okay, I don’t think you want to get in the way of people’s speech, and their ability to promote and advocate for the issues they care about. So then the question is, “All right, well, what’s the right balance?” How do you make sure you’re providing the right level of controls, so that people who shouldn’t be participating in these debates aren’t– or that, at least, you’re providing the right transparency? But I think we’ve drifted a bit from the original question–

Jonathan Zittrain: Yes.

Mark Zuckerberg: But– but, yeah. So, let’s get back to where we were–

Jonathan Zittrain: Well, here’s– here’s one way to maybe push it forward, which is this: a platform as complete as Facebook offers a lot of opportunities to shape what people see and maybe help them with those nudges– now’s the time to go to the gym, or keeping them from falling into the clutches of a payday loan. And there’s a question of whether, as soon as a platform has the means to do it, there’s now an ethical obligation to do it– to help people live their best lives?

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: And I worry about any one company having to carry a burden like producing the most perfect, most reasonable news feed for each of– what is it, 2.5 billion active users? Something like that.

Mark Zuckerberg: Yeah. On that order.

Jonathan Zittrain: All along, there might be ways to start to get a little bit into the engineering-design side of things that would say, “All right, in hindsight, is there a way to design this so the stakes aren’t so high?”– not just focusing on, “Gosh, is Facebook doing this right?” It’s as if there were only one newspaper, or one or two newspapers, in the whole world– it’s like, “Well, then what The New York Times chose to put on its homepage, if it were the only newspaper, would take on a special significance.”

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: So, as a technical matter, some of the students in this room have had the chance to hear from Tim Berners-Lee, the inventor of the World Wide Web, who has a new idea for a thing called “Solid.” I don’t know whether you’ve heard of Solid. It’s a protocol rather than a product. So, there are no cars driving off the lot today. But the idea of it is to allow people to own the data they generate as they motor around the web, ultimately in data lockers of their own. Now, for somebody like Tim, that might mean a locker under his desk, so he can wake up in the middle of the night and see where his data is. For everybody else, it might mean in a rack somewhere, perhaps guarded by a fiduciary who’s looking out for them, the way we put money in a bank and can then sleep at night knowing the bankers are– this may not be the best metaphor for 2019, but– watching it.

Mark Zuckerberg: We’ll get there.

Jonathan Zittrain: We’ll get there. But Solid says that if you do it that way, then people– or helpful proxies of theirs– would be able to say, “All right, Facebook is coming along. It wants the following data of mine, including the data generated about me as I use it, but that gets stored back in my locker, and it has to come back to my well each time to draw the water out.” That way, if I ever want to switch to Schmacebook or whatever, it’s still in my well, and I can just grant Schmacebook permission to look at it– I don’t have to do some kind of data slurp and then re-upload it. It’s a thoroughly distributed way of thinking about data. And I’m curious, from an engineering standpoint, whether that seems doable for something of the size and number of spinning wheels that Facebook has–

Mark Zuckerberg: Yeah–

Jonathan Zittrain: –and I’m curious for your reaction to an idea like that.

Mark Zuckerberg: So, I think it’s quite interesting. Certainly, the level of computation that Facebook is doing, and all of the services that we’re building, already happens in a distributed way. I mean, as a basic model, I think the data-center capacity we’re planning to build out over the next five years– what we think we’re going to need for this– is on the order of what all of AWS and all of Google Cloud are doing to support all of their customers. So, okay, this is a relatively computationally intensive thing.

Over time, you’d assume you’ll get more compute. So decentralized things, which are less computationally efficient, will be harder to take on– they’re harder to do computationally– but eventually maybe the computing resources will be there to do it. I think the more interesting questions are not feasibility in the near term but the philosophical questions about the goodness of a system like that.

So, there’s one question, if you will– we can get into decentralization– one of the things I’ve been thinking about a lot is a use of blockchain that I’m potentially interested in– although I haven’t figured out a way to make this work yet– which is around authentication, and basically granting access to your information and to different services. So, basically, replacing the notion of what we have with Facebook Connect with something that’s fully distributed.

Jonathan Zittrain: “Do you want to log in with your Facebook account?” being the status quo–

Mark Zuckerberg: Basically, you take your information, you store it on some decentralized system, and you have the choice of whether to log in to different places, and you’re not going through an intermediary– which is kind of like what you’re suggesting here–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –in a sense. Okay, now, there are a lot of things that I think would be quite attractive about that. You know, for developers, one of the things that’s really uncomfortable about working with our system, or Google’s system, or delivering services through Apple’s App Store, is that you don’t want an intermediary between you and the people you’re serving– the people who use your service and you– right? Where someone can say, “Hey, as a developer, you have to follow our policies, and if you don’t, we can cut off your access to the people using your service.” That’s a difficult and uncomfortable position to be in. I think developers–

Jonathan Zittrain: –you’re referring to recent events.

Mark Zuckerberg: No, well– I was fine at the time–

Mark Zuckerberg: But I think it underscores– I think every developer probably feels this: people are using whatever app store, but also logging in with Facebook, with Google– with any of these services, you want a direct relationship with the people you serve.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Now, okay, but let’s look at the flip side. So, what we saw with Cambridge Analytica over the last couple of years was basically an example where people chose to take data– some of it was their own data, some of it was data they could see from their friends, right? Because if you want to do something like make it so that an alternative service could build a competing news feed, then you need to be able to make it so that people can bring the data they see [ph?] within the system. All right, so, basically, people chose to give their data to a developer affiliated with Cambridge University, a really respected institution, and then that developer turned around and sold the data to the firm Cambridge Analytica, which was in violation of our policies. So we cut off the developer’s access. Of course, in a fully distributed system there would be no one to cut off the developer’s access. So, the question is if you have a fully distributed system, it dramatically empowers individuals on the one hand, but it really raises the stakes and it gets to your questions around, well, what are the boundaries on consent and how people can really actually effectively know that they’re giving consent to an institution?

In some ways it’s a lot easier to regulate and hold accountable large companies like Facebook or Google, because they’re more visible, they’re more transparent than the long tail of services that people would chose to then go interact with directly. So, I think that this is a really interesting social question. To some degree I think this idea of going in the direction of blockchain authentication is less gated on the technology and capacity to do that. I think if you were doing fully decentralized Facebook, that would take massive computation, but I’m sure we could do fully decentralized authentication if we wanted to. I think the real question is do you really want that?
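A minimal sketch of what decentralized authentication of the kind floated here could look like: the user holds a private key, and proves identity to any service by signing that service’s login challenge, with no Facebook Connect-style intermediary to vouch for them (or to cut them off). The Schnorr-style group below uses deliberately tiny toy parameters for readability; a real system would use standardized curves and audited cryptographic libraries, and everything here (names, parameters, challenge format) is invented for illustration:

```python
# Toy Schnorr-style signature over a tiny prime-order subgroup, illustrating
# login-by-signature with no intermediary. NOT secure: the parameters are
# far too small and are chosen only so the arithmetic is easy to follow.
import hashlib

P = 2267                      # small prime; P - 1 = 2266 = 2 * 11 * 103
Q = 103                       # prime order of the subgroup we sign in
G = pow(2, (P - 1) // Q, P)   # generator of the order-Q subgroup

def _h(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen(secret_seed):
    x = _h("key", secret_seed) or 1    # private key in [1, Q)
    return x, pow(G, x, P)             # (private key, public key)

def sign(x, message):
    k = _h("nonce", x, message) or 1   # deterministic toy nonce
    r = pow(G, k, P)
    e = _h(r, message)
    s = (k + x * e) % Q
    return r, s

def verify(y, message, sig):
    r, s = sig
    e = _h(r, message)
    # g^s == r * y^e holds exactly when s = k + x*e for the k behind r.
    return pow(G, s, P) == (r * pow(y, e, P)) % P

# A service issues a one-time challenge; the user signs it locally and no
# third party sits between them. All identifiers here are hypothetical.
priv, pub = keygen("my-device-secret")
challenge = "login:schmacebook.example:nonce-42"
assert verify(pub, challenge, sign(priv, challenge))
```

The trade-off Zuckerberg raises survives the sketch: nothing in this flow gives anyone a lever to revoke a misbehaving party’s access, because there is no account to suspend, only a key pair.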

Jonathan Zittrain: Yes.

Mark Zuckerberg: Right? And I think you’d have more cases where, yes, people would be able to not have an intermediary, but you’d also have more cases of abuse and the recourse would be much harder.

Jonathan Zittrain: Yes. What I hear you saying is people as they go about their business online are generating data about themselves that’s quite valuable, if not to themselves, to others who might interact with them. And the more they are empowered, possibly through a distributed system, to decide where that data goes, with whom they want to share it, the more they could be exposed to exploitation. This is a genuine dilemma–

Mark Zuckerberg: Yeah, yeah.

Jonathan Zittrain: –because I’m a huge fan of decentralization.

Mark Zuckerberg: Yeah, yeah.

Jonathan Zittrain: But I also see the problem. And maybe one answer is there’s some data that’s just so toxic there’s no vessel we should put it in; it might eat a hole through it or something, metaphorically speaking. But, then again, innocuous data can so quickly be assembled into something scary. So, I don’t know if the next election–

Mark Zuckerberg: Yeah. [ph?] I mean, I think in general we’re talking about large-scale data being assembled into meaning something different from what the individual data points mean.

Jonathan Zittrain: Yes.

Mark Zuckerberg: And I think that’s the whole challenge here. But I philosophically agree with you that– I mean, I want to think about the– like, I do think about the work that we’re doing as a decentralizing force in the world, right? A lot of the reason why I think people of my generation got into technology is because we believe that technology gives individuals power and isn’t massively centralizing. Now you’ve built a bunch of big companies in the process, but I think what has largely happened is that individuals today have more voice, more ability to affiliate with who they want, and stay connected with people, ability to form communities in ways that they couldn’t before, and I think that’s massively empowering to individuals and that’s philosophically kind of the side that I tend to be on. So, that’s why I’m thinking about going back to decentralized or blockchain authentication. That’s why I’m kind of bouncing around how could you potentially make this work, because my orientation is to try to go in that direction.

Jonathan Zittrain: Yes.

Mark Zuckerberg: An example where I think we’re generally a lot closer to going in that direction is encryption. I mean, this is, like, one of the really big debates today is basically what are the boundaries on where you would want a messaging service to be encrypted. And there are all these benefits from a privacy and security perspective, but, on the other hand, if what we’re trying to do– one of the big issues that we’re grappling with content governance and where is the line between free expression and, I suppose, privacy on one side, but safety on the other as people do really bad things, right, some of the time. And I think people rightfully have an expectation of us that we’re going to do everything we can to stop terrorists from recruiting people or people from exploiting children or doing different things. And moving in the direction of making these systems more encrypted certainly reduces some of the signals that we would have access to be able to do some of that really important work.

But here we are, right, we’re sitting in this position where we’re running WhatsApp, which is the largest end-to-end encrypting service in the world; we’re running messenger, which is another one of the largest messaging systems in the world where encryption is an option, but it isn’t the default. I don’t think long term it really makes sense to be running different systems with very different policies on this. I think this is sort of a philosophical question where you want to figure out where you want to be on it. And, so, my question for you– now,

I’ll talk about how I’m thinking about this– is: all right, if you were in my position and you got to flip a switch– “flip a switch” is probably too glib, because there’s a lot of work that goes into this– and go in one direction for both of those services, how would you think about that?

Jonathan Zittrain: Well, the question you’re putting on the table, which is a hard one is “Is it okay,” and let’s just take the simple case, “for two people to communicate with each other in a way that makes it difficult for any third party to casually listen in?” Is that okay? And I think that the way we normally answer that question is kind of a form of what you might call status quo-ism, which is not satisfying. It’s whatever has been the case is—

Mark Zuckerberg: Yeah, yeah.

Jonathan Zittrain: –whatever has been the case is what should stay the case.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And, so, for WhatsApp, it’s like right now WhatsApp, as I understand it, you could correct me if I’m wrong, is pretty hard to get into if–

Mark Zuckerberg: It’s fully end-to-end encrypted.

Jonathan Zittrain: Right. So, if Facebook gets handed a subpoena or a warrant or something from name-your-favorite-country–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: –and you’re just like, “Thank you for playing. We have nothing to–”

Mark Zuckerberg: Oh, yeah, we’ve had employees thrown in jail because we have gotten court orders that we have to turn over data that we probably wouldn’t anyway, but we can’t because it’s encrypted.

Jonathan Zittrain: Yes. And then, on the other hand, and this is not as clean as it could be in theory, but Messenger is sometimes encrypted, sometimes not. If it doesn’t happen to have been encrypted by the users, then that subpoena could work and, more than that, there could start to be some automated systems, either on Facebook’s own initiative or under pressure from governments in the general case, not a specific warrant, to say, “Hey, if the following phrases appear, if there’s some telltale that says, ‘This is somebody going after a kid for exploitation,’ it should be forwarded up.” If that’s already happening and we can produce x-number of people who have been identified and a number of crimes averted that way, who wants to be the person to be like, “Lock it down!” Like, “We don’t want any more of that!” But I guess, to put myself now to your question, when I look out over years rather than just weeks or months, the ability to casually peek at any conversation going on between two people or among a small group of people, or even to have a machine do it for you, so you can just set your alert list, you know, crudely speaking, and get stuff back– it’s always trite to call something Orwellian, but it makes Orwell look like a piker. I mean, it seems like a classic case where the next sentence would be “What could possibly go wrong?”

Jonathan Zittrain: And we can fill that in! And it does mean, though, I think that we have to confront the fact that if we choose to allow that kind of communication, then there’s going to be crimes unsolved that could’ve been solved. There’s going to be crimes not prevented that could have been prevented. And the only thing that kind of blunts it a little is it is not really all or nothing. The modern surveillance states of note in the world have a lot of arrows in their quivers. And just being able to darken your door and demand surveillance of a certain kind, that might be a first thing they would go to, but they’ve got a Plan B, and a Plan C, and a Plan D. And I guess it really gets to what’s your threat model? If you think everybody is kind of a threat, think about the battles of copyright 15 years ago. Everybody is a potential infringer. All they have to do is fire up Napster, then you’re wanting some massive technical infrastructure to prevent the bad thing. If what you’re thinking is instead, there are a few really bad apples and they tend to– when they congregate online or otherwise with one another– tend to identify themselves, and then we might have to send somebody near their house to listen with a cup at the window, metaphorically speaking. That’s a different threat model and you might not need it.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: Is that getting to an answer to your question?

Mark Zuckerberg: Yeah, and I think I generally agree. I mean, I’ve already said publically that my inclination is to move these services in the direction of being all encrypted, at least the private communication version. I basically think if you want to kind of talk in metaphors, messaging is like people’s living room, right? And I think we– you know, we definitely don’t want a society where there’s a camera in everyone’s living room watching the content of those conversations.

Jonathan Zittrain: Even as we’re now– I mean, it is 2019, people are happily putting cameras in their living rooms.

Mark Zuckerberg: That’s their choice, but I guess they’re putting cameras in their living rooms, well, for a number of reasons, but–

Jonathan Zittrain: And Facebook has a camera that can go in your living room–

Mark Zuckerberg: That is, I guess–

Jonathan Zittrain: I just want to be clear.

Mark Zuckerberg: Yeah, although that would be encrypted in this world.

Jonathan Zittrain: Encrypted between you and Facebook!

Mark Zuckerberg: No, no, no. I think– but it also–

Jonathan Zittrain: Doesn’t it have like a little Alexa functionality, too?

Mark Zuckerberg: Well, Portal works over Messenger. So, if we go towards encryption on Messenger, then that’ll be fully encrypted, which I think, frankly, is probably what people want.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: The other model, besides the living room, is the town square and that, I think, just has different social norms and different policies and norms that should be at play around that. But I do think that these things are very different. Right? You’re not going to– you may end up in a world where the town square is a fully decentralized or fully encrypted thing, but it’s not clear what value there is in encrypting something that’s public content anyway, or very broad.

Jonathan Zittrain: But, now, you were put to it pretty hard in that as I understand it there’s now a change to how WhatsApp works, that there’s only five forwards permitted.

Mark Zuckerberg: Yeah, so, this is a really interesting point, right? So, when people talk about how encryption will darken some of the signals that we’ll be able to use, you know, both for potentially providing better services and for preventing harm. One of the– I guess, somewhat surprising to me, findings of the last couple of years of working on content governance and enforcement is that it often is much more effective to identify fake accounts and bad actors upstream of them doing something bad by patterns of activity rather than looking at the content.

Jonathan Zittrain: So-called metadata.

Mark Zuckerberg: Sure.

Jonathan Zittrain: “I don’t know what they’re saying, but here’s who’s they’re calling” kind of thing.

Mark Zuckerberg: Yeah, or just like they– this account doesn’t seem to really act like a person, right?

And I guess as AI gets more advanced and you build these adversarial networks or generative adversarial networks, you’ll get to a place where you have AI that can probably more effectively–

Jonathan Zittrain: Go under mimic [ph?] cover. Mimic act like another person–

Mark Zuckerberg: –for a while.

Mark Zuckerberg: Yeah. But, at the same time, you’ll be building up contrary AI on the other side, that is better at identifying AIs that are doing that. But this has certainly been the most effective tactic across a lot of the areas where we’ve needed to focus to preventing harm. You know, the ability to identify fake accounts, which, like, a huge amount of the– under any category of issue that you’re talking about, a lot of the issues downstream come from fake accounts or people who are clearly acting in some malicious or not normal way. You can identify a lot of that without necessarily even looking at the content itself. And if you have to look at a piece of content, then in some cases, you’re already late, because the content exists and the activity has already happened. So, that’s one of the things that makes me feel like encryption for these messaging services is really the right direction to go, because it’s a very pro-privacy and pro-security move to give people that control and assurance, and I’m relatively confident that even though you are losing some tools on the finding-harmful-content side of the ledger, I don’t think at the end of the day that those are going to end up being the most important tools–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –for finding the most of the–

Jonathan Zittrain: But now connect it up quickly to the five forwards thing.

Mark Zuckerberg: Oh, yeah, sure. So, that gets down to, if you’re not operating on a piece of content directly, you need to operate on patterns of behavior in the network. And what we basically found was there weren’t that many good uses for people forwarding things more than five times except to basically spam or blast stuff out. It was being disproportionately abused. So, you end up thinking about different tactics when you’re not operating on content specifically; you end up thinking about patterns of usage more.
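The forward limit is a concrete example of a behavioral control that never inspects an encrypted message body. A rough sketch of the mechanic, under a simplified reading of the policy (a flat per-message cap of five; the class and method names are invented):

```python
# Sketch of a content-blind abuse control like WhatsApp's forward limit:
# the service only counts forwards of an opaque message id, never reading
# the encrypted body. A flat cap of 5 is a simplification of the policy.
from collections import defaultdict

FORWARD_LIMIT = 5

class ForwardGuard:
    """Tracks per-message forward counts and refuses forwards past the cap."""
    def __init__(self, limit=FORWARD_LIMIT):
        self.limit = limit
        self.counts = defaultdict(int)   # message_id -> forwards so far

    def try_forward(self, message_id):
        if self.counts[message_id] >= self.limit:
            return False                 # blocked: looks like blast/spam use
        self.counts[message_id] += 1
        return True

guard = ForwardGuard()
results = [guard.try_forward("msg-1") for _ in range(6)]
print(results)   # first five forwards succeed, the sixth is refused
```

The design choice matches the point being made: the pattern of usage (a message fanning out repeatedly) is the signal, so the control keeps working even when the content itself is end-to-end encrypted.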

Jonathan Zittrain: Well, spam, I get and that– I’m always in favor of things that reduce spam. However, you could also say the second category was just to spread content. You could have the classic, I don’t know, like Les Mis, or Paul Revere’s ride, or Arab Spring-esque in the romanticized vision of it: “Gosh, this is a way for people to do a tree,” and pass along a message that “you can’t stop the signal,” to use a Joss Whedon reference. You really want to get the word out. This would obviously stop that, too.

Mark Zuckerberg: Yeah, and then I think the question is you’re just weighing whether you want this private communication tool where the vast majority of the use and the reason why it was designed was the vast majority of just one-on-one; there’s a large amount of groups that people communicate into, but it’s a pretty small edge case of people operating this with, like– you have a lot of different groups and you’re trying to organize something and almost hack public content-type or public sharing- type utility into an encrypted space and, again, there I think you start getting into “Is this the living room or is this the town square?” And when people start trying to use tools that are designed for one thing to get around what I think the social norms are for the town square, that’s when I think you probably start to have some issues. This is not– we’re not done addressing these issues. There’s a lot more to think through on this

Jonathan Zittrain: Yeah.

Mark Zuckerberg: –but that’s the general shape of the problem that at least I perceive from the work that we’re doing.

Jonathan Zittrain: Well, without any particular segue, let’s talk about fake news.

Jonathan Zittrain: So, insert your favorite segue here. There’s some choice or at least some decision that gets made to figure out what’s going to be next in my newsfeed when I scroll up a little more.

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: And in the last conversation bit, we were talking about how much we’re looking at content versus telltales and metadata, things that surround the content.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: For knowing about what that next thing in the newsfeed should be, is it a valid desirable material consideration, do you think, for a platform like Facebook to say is the thing we are about to present true, whatever true means?

Mark Zuckerberg: Well, yes, because, again, getting at trying to serve people, people tell us that they don’t want fake content. Right? I mean, I don’t know anyone who wants fake content. I think the whole issue is, again, who gets to decide. Right? So, broadly speaking, I don’t know any individual who would sit there and say, “Yes, please show me things that you know are false and that are fake.” People want good-quality content and information. That said, I don’t really think that people want us to be deciding what is true for them, and people disagree on what is true. And, like, truth is– I mean, there are different levels of– when someone is telling a story, maybe the meta arc is talking about something that is true, but the facts that were used in it are wrong in some nuanced way– but, like, it speaks to some deeper experience. Well, was that true or not? And do people want that disqualified from being shown to them? I think different people are going to come to different places on this.

Now, so, I’ve been very sensitive on this– like, we really want to make sure that we’re showing people high-quality content and information. We know that people don’t want false information. So we’re building quite advanced systems to be able to– to make sure that we’re emphasizing and showing stuff that is going to be high quality. But the big question is where do you get the signal on what the quality is? So the kind of initial v.1 of this was working with third-party fact checkers.

Right, I believe very strongly that people do not want Facebook and that we should not be the arbiters of truth in deciding what is correct for everyone in the society. I think people already generally think that we have too much power in deciding what content is good. I tend to also be concerned about that and we should talk about some of the governance stuff that we’re working on separately to try to make it so that we can bring more independent oversight into that.

Jonathan Zittrain: Yes.

Mark Zuckerberg: But let’s put that in a box for now and just say that with those concerns in mind, I’m definitely not looking to try to take on a lot more in terms of also deciding in addition to enforcing all the content policies, also deciding what is true for everyone in the world. Okay, so v.1 of that is we’re going to work with–

Jonathan Zittrain: Truth experts.

Mark Zuckerberg: We’re working with fact checkers.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: And, and they’re experts and basically, there’s like a whole field of how you go and assess certain content. They’re accredited. People can disagree with the leaning of some of these organizations.

Jonathan Zittrain: Who accredits the fact checkers?

Mark Zuckerberg: The Poynter Institute for Journalism.

Jonathan Zittrain: I should apply for my certification.

Mark Zuckerberg: You may.

Jonathan Zittrain: Okay, good.

Mark Zuckerberg: You’d probably get it, but you have to– You’d have to go through the process.

Mark Zuckerberg: The issue there is there aren’t enough of them, right? So there’s a large amount of content– there’s obviously a lot of information shared every day, and there just aren’t a lot of fact checkers. So then the question is, okay, that is probably–

Jonathan Zittrain: But the portion– You’re saying the food is good, it’s just the portions are small. But the food is good.

Mark Zuckerberg: I think in general, but so you build systems, which is what we’ve done especially leading up to elections where I think are some of the most fraught times around this where people really are aggressively trying to spread misinformation.

Jonathan Zittrain: Yes.

Mark Zuckerberg: You build systems that prioritize content that seems like it’s going viral because you want to reduce the prevalence of how widespread the stuff gets, so that way the fact checkers have tools to be able to, like, prioritize what they need to go– what they need to go look at. But it’s still getting to a relatively small percent of the content. So I think the real thing that we want to try to get to over time is more of a crowd-sourced model where it’s not that people are trusting some sort of basic set of experts who are accredited but are in some kind of lofty institution somewhere else. It’s like, do you trust– yeah, like, if you get enough data points from within the community of people reasonably looking at something and assessing it over time, then the question is can you compound that together into something that is a strong enough signal that we can then use that?
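[The crowd-sourced signal Zuckerberg sketches here is easy to illustrate. Facebook has not published such a system; the sketch below is purely hypothetical, with made-up function names and thresholds, and only acts once enough independent ratings have accumulated.]

```python
from statistics import mean

def crowd_quality_signal(ratings, min_raters=20, threshold=0.3):
    """Pool independent community ratings (0.0 = low quality,
    1.0 = high quality) into one signal, acting only once the
    sample is large enough to mean something."""
    if len(ratings) < min_raters:
        return None  # too few data points to trust the crowd yet
    score = mean(ratings)
    # Demote only when the pooled signal is strongly negative.
    return "demote" if score < threshold else "keep"
```

Zittrain’s Astroturf worry, raised just below, maps onto the assumptions baked into `min_raters` and the independence of the raters.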

Jonathan Zittrain: Kind of in the old school, like a Slashdot moderating system

Mark Zuckerberg: Yeah.

Jonathan Zittrain: With only the worry that if the stakes get high enough, somebody wants to Astroturf that.

Mark Zuckerberg: Yes.

Jonathan Zittrain: I’d be–

Mark Zuckerberg: There are a lot of questions here, which is why I’m not sitting here and announcing a new program.

Mark Zuckerberg: But what I’m saying is this is, like,–

Jonathan Zittrain: Yeah,

Mark Zuckerberg: This is the general direction that I think we should be thinking about when we have– and I think that there’s a lot of questions and–

Jonathan Zittrain: Yes.

Mark Zuckerberg: And we’d like to run some tests in this area to see whether this can help out. Which would be upholding the principles which are that we want to stop–

Jonathan Zittrain: Yes.

Mark Zuckerberg: The spread of misinformation.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Knowing that no one wants misinformation. And the other principle, which is that we do not want to be arbiters of truth.

Jonathan Zittrain: Want to be the decider, yes.

Mark Zuckerberg: And I think that that’s the basic– those are the basic contours I think of that, of that problem.

Jonathan Zittrain: So let me run an idea by you that you can process in real time and tell me the eight reasons I have not thought of why this is a terrible idea. And that would be people see something in their Facebook feed. They’re about to share it out because it’s got a kind of outrage factor to it. I think of the classic story from two years ago in The Denver Guardian about “FBI agent suspected in Hillary Clinton email leak implicated in murder-suicide.” I have just uttered fake news.

None of that was true if you clicked through The Denver Guardian. There was just that article. There is no Denver Guardian. If you live in Denver, you cannot subscribe. Like, it is unambiguously fake. And it was shared more times than the most shared story during the election season of The Boston Globe. And so–

Mark Zuckerberg: So, and this is actually an example, by the way, of where trying to figure out fake accounts is a much simpler solution.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Than trying to down–

Jonathan Zittrain: So if a newspaper has one article–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: Wait for ten more before you decide they’re a newspaper.

Mark Zuckerberg: Yeah. Or, you know, I mean, there are any number of systems that you could build to basically detect, “Hey, this is–”

Jonathan Zittrain: A Potemkin.

Mark Zuckerberg: This is a fraudulent thing.

Jonathan Zittrain: Yes.

Mark Zuckerberg: And then you can take that down. And again, that ends up being a much less controversial decision because you’re doing it upstream based on the basis of inauthenticity.

Jonathan Zittrain: Yes.

Mark Zuckerberg: In a system where people are supposed to be real and represent that they’re their real selves, than downstream, trying to say, “Hey, is this true or false?”

Jonathan Zittrain: I made a mistake in giving you the easy case.

Mark Zuckerberg: Okay.

Jonathan Zittrain: So I should have not used that example.

Mark Zuckerberg: Too simple.

Jonathan Zittrain: You’re right and you knocked that one out of the park and, like, Denver Guardian, come up with more articles and be real and then come back and talk to us.

Jonathan Zittrain: So, here’s the harder case which is something that might be in an outlet that is, you know, viewed as legitimate, has a number of users, et cetera. So you can’t use the metadata as easily.

Imagine if somebody as they shared it out could say, “By the way, I want to follow this. I want to learn a little bit more about this.” They click a button that says that. And I also realized when I talked earlier to somebody at Facebook on this that adding a new button to the homepage is, like, everybody’s first idea

Mark Zuckerberg: Oh, yeah.

Jonathan Zittrain: And it’s–

Mark Zuckerberg: But it’s a reasonable thought experiment, even though it would lead to a very bad UI.

Jonathan Zittrain: Fair enough. I understand this is already–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: In the land of fantasy. So they add the button. They say, “I want to follow up on this.”

If enough people are clicking comparatively on the same thing to say, “I want to learn more about this. If anything else develops, let me know, Facebook,” that, then, if I have my pneumatic tube, it then goes to a virtually convened panel of three librarians. We go to the librarians of the nation and the world at public and private libraries across the land who agree to participate in this program. Maybe we set up a little foundation for it that’s endowed permanently and no longer connected to whoever endowed it. And those librarians together discuss the piece and they come back with what they would tell a patron if somebody came up to them and said, “I’m about to cite this in my social studies paper. What do you think?” And librarians, like, live for questions like that.

Mark Zuckerberg: Mm-hmm, yeah.

Jonathan Zittrain: They’re like, “Wow. Let us tell you.” And they have a huge fiduciary notion of patron duty that says, “I may disapprove of you even studying this, whatever, but I’m here to serve you, the user.”

Mark Zuckerberg: Yeah.

Jonathan Zittrain: “And I just think you should know, this is why maybe it’s not such a good source.” And when they come up with that they can send it back and it gets pushed out to everybody who asks for follow-up–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And they can do with it as they will. And last piece of the puzzle, we have high school students who apprentice as librarian number three for credit.

Jonathan Zittrain: And then they can get graded on how well they participated in this exercise which helps generate a new generation of librarian-themed people who are better off at reading things, so.

Mark Zuckerberg: All right, well, I think you have a side goal here which I haven’t been thinking about on the librarian thing.

Mark Zuckerberg: Which is the evil goal of promoting libraries.

Jonathan Zittrain: Well, it’s

Mark Zuckerberg: No, but I mean, look, I think solving– preventing misinformation or spreading misinformation is hard enough without also trying to develop high school students in a direction.

Jonathan Zittrain: Ah. My colleague Charles Foote–

Mark Zuckerberg: So, that’s solving a problem with a problem.

Jonathan Zittrain: Okay. Well, anyway, yes.

Mark Zuckerberg: So I actually think I agree with most of what you have in there. It doesn’t need to be a button on the home page, it can be– I mean, it turns out that there’s so many people using these services that even if you get– even if you put something that looks like it’s not super prominent, like, behind the three dots on a given newsfeed story, you have the options, yeah, you’re not– not everyone is going to– is going to like something.

Jonathan Zittrain: If 1 out of 1000 do it, you still get 10,000 or 100,000 people, yeah.

Mark Zuckerberg: You get pretty good signal. But I actually think you could do even better, which is, it’s not even clear that you need that signal. I think that that’s super helpful. I think really what matters is looking at stuff that’s getting a lot of distribution. So, you know, I think that there’s kind of this notion, and I’m going back to the encryption conversation, which is all right, if I say something that’s wrong to you in a one-on-one conversation, I mean, does that need to be fact checked? I mean, it’s, yeah, it would be good if you got the most accurate information.

Jonathan Zittrain: I do have a personal librarian to accompany me for most conversations, yes. There you go.

Mark Zuckerberg: Well, you are–

Jonathan Zittrain: Unusual.

Mark Zuckerberg: Yeah, yeah. Yes.

Mark Zuckerberg: That’s the word I was looking for.

Jonathan Zittrain: I’m not sure I believe you, but yes.

Mark Zuckerberg: It’s– But I think that there’s limited– I don’t think anyone would say that every message that goes back and forth in especially an encrypted messaging service should be

Jonathan Zittrain: Fact checked.

Mark Zuckerberg: Should be fact checked.

Jonathan Zittrain: Correct.

Mark Zuckerberg: So I think the real question is all right, when something starts going viral or getting a lot of distribution, that’s when it becomes most socially important for it to be– have some level of validation, or at least that we know that the community in general thinks that this is a reasonable thing. So it’s actually, while it’s helpful to have the signal of whether people are flagging this as something that we should look at, I actually think increasingly you want to be designing systems that just prevent, like, alarming or sensational content from going viral in the first place. And making sure that the stuff that is getting wide distribution is doing so because it’s high quality on whatever front you care about. So then, okay–
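[Earlier Zuckerberg described giving fact-checkers “tools to prioritize” what’s going viral. As a rough illustration only — not Facebook’s actual pipeline, and with an invented function name and threshold — routing the fastest-spreading unreviewed posts to scarce reviewers first might look like:]

```python
def prioritize_for_fact_check(posts, velocity_limit=500):
    """Order unreviewed, fast-spreading posts so that scarce
    fact-checkers see the most viral items first.
    `posts` maps post id -> (shares_per_hour, already_checked)."""
    queue = [(-velocity, post_id)
             for post_id, (velocity, checked) in posts.items()
             if velocity > velocity_limit and not checked]
    # Negated velocity sorts the fastest spreaders to the front.
    return [post_id for _, post_id in sorted(queue)]
```

Posts below the velocity limit, or already checked, never enter the review queue — mirroring the point that one-to-one messages don’t need fact-checking but viral content does.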

Jonathan Zittrain: And that quality is still generally from Poynter or some external party that

Mark Zuckerberg: Well, well quality has many dimensions.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: But certainly accuracy is one dimension of it. You also, I mean, you pointed out I think in one of your questions, is this piece of content prone to incite outrage. If you don’t mind, I’ll get to your panel of three things in a second, but as a slight detour on this.

Jonathan Zittrain: Yes.

Mark Zuckerberg: One of the findings that has been quite interesting is, you know, there’s this question about whether social media in general basically makes it so that sensationalist content gets the most distribution. And what we’ve found is that, all right, so we’re going to have rules, right, about what content is allowed. And what we found is that generally, within whatever rules you set up, as content approaches the line of what is allowed, it often gets more distribution. So you’ll have some rule on, you know, what– Take a completely different example, our nudity policies. Right. It’s like, okay, you have to define what is unacceptable nudity in some way. As you get as close to that as possible it’s like, all right. Like, this is maybe a photo of someone–

Jonathan Zittrain: The skin to share ratio goes up until it gets banned at which point it goes to zero.

Mark Zuckerberg: Yes. Okay. So that is a bad property of a system, right, that I think you want to generally address. You don’t want to design a community, or systems for helping to build a community, where things that get as close to the line of what is bad get the most distribution.

Jonathan Zittrain: So long as we have the premise, which in many cases is true, but I could probably try to think of some where it wouldn’t be true, that as you near the line, you are getting worse.

Mark Zuckerberg: That’s a good point. That’s a good point. There’s–

Jonathan Zittrain: You know, there might be humor that’s really edgy.

Mark Zuckerberg: That’s true.

Jonathan Zittrain: And that conveys a message that would be impossible to convey without the edginess, while not still–

Mark Zuckerberg: That is–

Jonathan Zittrain: But, I–

Mark Zuckerberg: That’s true.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: So but then you get the question of what’s the cost benefit of allowing that. And obviously, where you can accurately separate what’s good and bad which you, like in the case of misinformation I’m not sure you could do it fully accurately, but you can try to build systems that approximate that, there’s certainly the issue, which is that, I mean, there is misinformation which leads to massive public harm, right. So if it’s misinformation that is also spreading hate and leading to genocide or public attacks or, it’s like, okay, we’re not going to allow that. Right. That’s coming down. But then generally if you say something that’s wrong, we’re not going to try to block that.

Jonathan Zittrain: Yes.

Mark Zuckerberg: We’re just going to try to not show it to people widely because people don’t want content that is wrong. So then the question is as something is approaching the line, how do you assess that? This is a general theme in a lot of the content governance and enforcement work that we’re doing, which is there’s one piece of this which is just making sure that we can as effectively as possible enforce the policies that exist. Then there’s a whole other stream of work, which I called borderline content, which is basically this issue of as content approaches the line of being against the policies, how do you make sure that that isn’t the content that is somehow getting the most distribution? And a lot of the things that we’ve done in the last year were focused on that problem and it really improves the quality of the service and people appreciate that.
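[The “borderline content” policy described here — distribution falling as content approaches the policy line, instead of peaking there — can be sketched as a simple demotion curve. This is an illustrative toy, not Facebook’s formula; the function name, the linear shape, and the removal threshold are all invented for the example.]

```python
def borderline_demotion(policy_score, removal_threshold=0.9):
    """Distribution multiplier for a post, given a classifier's
    estimate (0..1) that it violates some policy. At or past the
    removal threshold the post comes down entirely (multiplier 0);
    below it, reach falls smoothly as the post nears the line,
    inverting the natural engagement curve Zuckerberg describes."""
    if policy_score >= removal_threshold:
        return 0.0  # over the line: removed, not just demoted
    # Linear penalty: full reach for clearly-fine content,
    # approaching zero reach right at the line.
    return 1.0 - policy_score / removal_threshold
```

Zittrain’s edgy-humor objection is a real limitation of any such curve: it presumes that closeness to the line is itself a quality signal.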

Jonathan Zittrain: So this idea would be stuff that you’re kind of letting down easy without banning and letting down easy as it’s going to somehow have a coefficient of friction for sharing that goes up. It’s going to be harder–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: For it to go viral.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And–

Mark Zuckerberg: So it’s fascinating because it’s just against– Like, you can take almost any category of policy that we have, so I used nudity a second ago. You know, gore and violent imagery.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Hate speech.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: Any of these things. I mean, there’s, like, hate speech, there’s content that you would just say is mean or toxic, but that did not violate– But that you would not want to have a society that banned being able to say that thing. But it’s, but you don’t necessarily want that to be the content that is getting the most distribution.

Jonathan Zittrain: So here’s a classic transparency question around exactly that system you described.

And when you described this, I think you did a post around this a few months ago. This was fascinating.

You had graphs in the post depicting this, which was great. How would you feel about sharing back to the person who posted or possibly to everybody who encounters it its coefficient of friction? Would that freak people out? Would it be, like, all right, I– And in fact, they would then probably start conforming their posts, for better or worse,–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: To try to maximize the sharability. But that rating is already somewhere in there by design. Would it be okay to surface it?

Mark Zuckerberg: So, as a principle, I think that that would be good, but I don’t– The way that the systems are designed isn’t that you get a score of how inflammatory or sensationalist a piece of content is. The way that it basically works is you can build classifiers that identify specific types of things. Right.

So we’re going down the list of, like, all right, there’s 20 categories of harmful content that you’re trying to identify. You know, everything from terrorist propaganda on the one hand to self-harm issues to hate speech and election interference. And basically, each of these things, while it uses a lot of the same underlying machine learning infrastructure, you’re doing specific work for each of them. So if you go back to the example on nudity for a second, you know, what you– you’re not necessarily scoring everything on a scale of not at all nude to nude. You’re basically enforcing specific policies. So, you know, you’re saying, “Okay, if–”
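[The architecture Zuckerberg outlines — one classifier per policy category, sharing a common dispatch layer, mapping to specific violations rather than one opaque “badness” score — can be sketched like this. The category names echo his list, but the classifiers here are trivial keyword stand-ins, not real models.]

```python
# Toy per-category classifiers; a real system would use trained models
# sharing common machine-learning infrastructure.
def check_nudity(text):             return "explicit" in text
def check_hate_speech(text):        return "slur" in text
def check_terror_propaganda(text):  return "recruitment" in text

CLASSIFIERS = {
    "nudity": check_nudity,
    "hate_speech": check_hate_speech,
    "terrorist_propaganda": check_terror_propaganda,
}

def flag_content(text):
    """Return the specific policies a post may violate, rather than
    a single score -- the 'understandable AI' point made below."""
    return [policy for policy, clf in CLASSIFIERS.items() if clf(text)]
```

Returning named policies instead of a number is exactly the design choice Zuckerberg defends a few exchanges later: feedback like “this might make people uncomfortable in this specific way” requires the system to map outputs to specific problems.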

Jonathan Zittrain: So by machine learning it would just be give me an estimate of the odds by which if a human looked at it who was employed to enforce policy–

Mark Zuckerberg: Well, basically–

Jonathan Zittrain: Whether it violates the policy.

Mark Zuckerberg: And you have a sense of, okay, this is– So what are the things that are adjacent to the policy, right? So you might say, okay, well, if the person is completely naked, that is something that you can definitely build a classifier to be able to identify with relatively high accuracy. But even if they’re not, you know, then the question is you kind of need to be able to qualitatively describe what are the things that are adjacent to that. So maybe the person is wearing a bathing suit and is in a sexually suggestive position. Right. It’s not like any piece of content you’re going to score from not at all nude to nude. But you kind of have the cases for what you think are adjacent to the issues and, again, you ground this qualitatively: people might click on it, they might engage with it, but at the end, they don’t necessarily feel good about it. And you want to get at, when you’re designing these systems, not just what people do, but also you want to make sure we factor in, too, like, is this the content that people say that they really want to be seeing? Do they–?

Jonathan Zittrain: In constitutional law, there’s a formal kind of definition that’s emerged for the word “prurient.” If something appeals to the prurient interest–

Mark Zuckerberg: Okay.

Jonathan Zittrain: As part of a definition of obscenity, the famous Miller test, which was not a beer-oriented test. And part of a prurient interest is basically it excites me and yet it completely disgusts me.

And it sounds like you’re actually converging to the Supreme Court’s vision of prurience with this.

Mark Zuckerberg: Maybe.

Jonathan Zittrain: And it might be– Don’t worry, I’m not trying to nail you down on that. But it’s very interesting that machine learning, which you invoked, is both really good, I gather, at something like this.

It’s the kind of thing that’s like just have some people tell me with their expertise, does this come near to violating the policy or not and I’ll just through a Spidey sense start to tell you whether it would.

Mark Zuckerberg: Mm-hmm.

Jonathan Zittrain: Rather than being able to throw out exactly what the factors are. I know the person’s fully clothed, but it still is going to invoke that quality. So all of the benefits of machine learning and all of, of course, all the drawbacks where it classifies something and somebody’s like, “Wait a minute. That was me doing a parody of blah, blah, blah.” That all comes to the fore.

Mark Zuckerberg: Yeah and I mean, when you ask people what they want to see in addition to looking at what they actually engage with, you do get a completely different sense of what people value and you can build systems that approximate that. But going back to your question, I think rather than giving people a score of the friction–

Jonathan Zittrain: Yes.

Mark Zuckerberg: I think you can probably give people feedback of, “Hey, this might make people uncomfortable in this way, in this specific way.” And this fits your–

Jonathan Zittrain: It might affect how much it gets– how much it gets shared.

Mark Zuckerberg: 是啊。 And this gets down to a different– There’s a different AI ethics question which I think is really important here, which is designing AI systems to be understandable by people

Jonathan Zittrain: Right.

Mark Zuckerberg: Right and to some degree, you don’t just want it to spit out a score of how offensive or, like, where it scores on any given policy. You want it to be able to map to specific things that might be problematic.

Jonathan Zittrain: Yes.

Mark Zuckerberg: And that’s the way that we’re trying to design the systems overall.

Jonathan Zittrain: Yes. Now we have something parked in the box we should take out, which is the external review stuff. But before we do, one other just transparency thing maybe to broach. It basically just occurred to me, I imagine it might be possible to issue me a score of how much I’ve earned for Facebook this year. It could simply say, “This is how much we collected on the basis of you in particular being exposed to an ad.” And I know sometimes people, I guess, might compete to get their numbers up. But I’m just curious, would that be a figure? I’d kind of be curious to know, in part because it might even lay the groundwork of being like, “Look, Mark, I’ll double it. You can have double the money and then don’t show me any ads.” Can we get a car off of that lot today?

Mark Zuckerberg: Okay, well, there’s a lot–

Mark Zuckerberg: There’s a lot in there.

Jonathan Zittrain: It was a quick question.

Mark Zuckerberg: So there’s a question in what you’re saying, which is: so we built an ad-supported system. Should we have an option for people to pay to not see ads?

Jonathan Zittrain: Right.

Mark Zuckerberg: I think is kind of what you’re saying. I mean, just as the basic primer from first principles on this. You know, we’re building this service. We want to give everyone a voice. We want everyone to be able to connect with who they care about. If you’re trying to build a service for everyone,

Jonathan Zittrain: Got to be free. That’s just

Mark Zuckerberg: If you want them to use it, that’s just going to be the argument. Yes, yes.

Jonathan Zittrain: Okay. All right.

Mark Zuckerberg: So then, so this is kind of a tried and true thing. There are a lot of companies over time that have been ad supported. In general what we find is that if people are going to see ads, they want them to be relevant. They don’t want them to be junk. Right. So then within that you give people control over how their data is used to show them ads. But the vast majority of people say, like, show me the most relevant ads that you can because I get that I have to see ads. This is a free service. So now the question is, all right, so there’s a whole set of questions around that that we could get into, but– but then–

Jonathan Zittrain: For which we did talk about enough to reopen it, the personalization exploitation.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: Or even just a philosophical question. Right now, Uber or Lyft are not funded that way.

We could apply this ad model to Uber or Lyft, “Free rides. Totally free. It’s just every fifth ride takes you to Wendy’s and idles outside the drive through window.”

Jonathan Zittrain: “Totally up to you what you want to do, but you’re going to sit here for a while,” and then you go on your way. I don’t know how– and status quo-ism would probably say people would have a problem with that, but it would give people rides that otherwise wouldn’t get rides.

Mark Zuckerberg: I have not thought about that case in their–

Mark Zuckerberg: In their business, so, so–

Jonathan Zittrain: Well, that’s my patent, damn it, so don’t you steal it.

Mark Zuckerberg: But certainly some services, I think, lend themselves better towards being ad supported than others.

Jonathan Zittrain: Okay.

Mark Zuckerberg: Okay and I think generally information-based ones tend to–

Jonathan Zittrain: Than my false imprisonment hypo, I’d– Okay, fair enough.

Mark Zuckerberg: I mean, that seems

Jonathan Zittrain: Yeah.

Mark Zuckerberg: There might be, you know, more– more issues there. But okay, but go to the subscription thing.

Jonathan Zittrain: Yes.

Mark Zuckerberg: When people have questions about the ad model on Facebook, I don’t think the questions are just about the ad model, I think they’re about both seeing ads and data use around ads.

And the thing that I think, so when I think about this, it’s, I don’t just think you want to let people pay to not see ads because I actually think then the question is– the questions are around ads and data use, and I don’t think people are going to be that psyched about not seeing ads but then not having different controls over how their data is used. Okay, but now you start getting into a principle question, which is are we going to let people pay to have different controls on data use than other people. And my answer to that is a hard no, right. So the prerequisite–

Jonathan Zittrain: What’s an example of data use that isn’t ad-based, just so we know what we’re talking about?

Mark Zuckerberg: That isn’t ad-based?

Jonathan Zittrain: Yeah.

Mark Zuckerberg: Like what do you mean?

Jonathan Zittrain: You were saying, I don’t want to see ads. But you’re saying that’s kind of just the wax on the car. What’s underneath is how the data gets used.

Mark Zuckerberg: So, well, look– Maybe– let me keep going with this explanation and then I think this’ll be clear.

Jonathan Zittrain: Yeah, sure.

Mark Zuckerberg: So one of the things that we’ve been working on is this tool that we call clear history. And the basic idea is you can kind of analogize it to a web browser where you can clear your cookies. That’s kind of a normal thing. You know that when you clear your cookies you’re going to get logged out of a bunch of stuff. A bunch of stuff might get more annoying.

Jonathan Zittrain: Which is why my guess is, am I right, probably nobody clears their cookies.

Mark Zuckerberg: I don’t know.

Jonathan Zittrain: They might use incognito mode or something, but.

Mark Zuckerberg: I think– I don’t know. How many of you guys clear your cookies every once in a while, right?

Jonathan Zittrain: This is not a representative group, damn it.

Mark Zuckerberg: Okay. Like, maybe once a year or something I’ll clear my cookies.

Mark Zuckerberg: But no, it’s, I think–

Jonathan Zittrain: Happy New Year.

Mark Zuckerberg: No, over some period of time, all right, but–

Jonathan Zittrain: Yeah, okay.

Mark Zuckerberg: But not necessarily every day. But it’s important that people have that tool even though it might in a local sense make their experience worse.

Jonathan Zittrain: Yes.

Mark Zuckerberg: Okay. So that’s the kind of content that different services, websites and apps send to Facebook that, you know, we use to help measure the ads’ effectiveness there, right. So, things like, you know, if you’re an app developer and you’re trying to pay for ads to help grow your app, we want to only charge you when something that we show leads to an install, not just whether someone sees the ad or clicks on it, but if they add–

Jonathan Zittrain: That requires a whole infrastructure to, yeah.

Mark Zuckerberg: Okay, so then, yeah, so you build that out. It helps us show people more relevant ads.

It can help show more relevant content. Often a lot of these signals are super useful also on the security side for some of the other things that we’ve talked about, so that ends up being important. But fundamentally, you know, looking at the model today, it seems like you should have something like this ability to clear history. It turns out that it’s a much more complex technical project. I’d talked about this at our developer conference last year, about how I’d hoped that we’d roll it out by the end of 2018 and just, the plumbing goes so deep into all the different systems that it’s, that– But we’re still working on it and we’re going to do it. It’s just it’s taking a little bit longer.

Jonathan Zittrain: So clear history basically means it’s as if I’m a newb, I just showed up–

Mark Zuckerberg: Yes.

Jonathan Zittrain: Even though I’ve been using Facebook for a while, it’s as if it knows nothing about me and it starts accreting again.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And I’m just trying to think just as a plain old citizen, how would I make an informed judgment about how often to do that or when I should do it? What–?

Mark Zuckerberg: Well, hold on. Let’s go to that in a second.

Jonathan Zittrain: Okay.

Mark Zuckerberg: But one thing, just to connect the dots on the last conversation.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: Clear history is a prerequisite, I think, for being able to do anything like subscriptions.

Right. Because, like, partially what someone would want to do if they were going to really actually pay for a non-ad-supported version where their data wasn’t being used in a system like that, you would want to have a control so that Facebook didn’t have access or wasn’t using that data or associating it with your account. And as a principled matter, we are not going to just offer a control like that to people who pay.

Right. That’s going to, if we’re going to give controls over data use, we’re going to do that for everyone in the community. So that’s the first thing that I think we need to go do.

Mark Zuckerberg: So that’s, so that’s kind of– This is sort of how we’re thinking about the projects, and this is a really deep and big technical project, but we’re committed to doing it because I think that’s what it’s there for.

Jonathan Zittrain: And I guess like an ad blocker or somebody could then write a little script for your browser that would just clear your history every time you visit or something.

Mark Zuckerberg: Oh, yeah, no, but the plan would also be to offer something that’s an ongoing thing.

Jonathan Zittrain: I see.

Mark Zuckerberg: In your browser, but I think the analogy here is you kind of have, in your browser you have the ability to clear your cookies. And then, like, in some other place you have under your, like, nuclear settings, like, don’t ever accept any cookies in my browser. And it’s like, all right, your browser’s not really going to work that well.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: But, but you can do that if you want because you should have that control. I think that these are part and parcel, right. I think a lot of people might go and clear their history on a periodic basis because they– Or, or actually in the research that we’ve done on this as we’ve been developing it, the real thing that people have told us that they want is similar to cookie management, not necessarily wiping everything, because that ends in the inconvenience of getting logged out of a bunch of things, but there are just certain services or apps that you don’t want that data to be connected to your Facebook account. So having the ability on an ad hoc basis to go through and say, “Hey, stop associating this thing,” is going to end up being a quite important thing that I think we want to try to deliver. So that’s, this is partially as we’re getting into this, it’s a more complex thing but I think it’s very valuable. And I think in any conversation around the– around subscriptions, I think you would want to start with giving people these, make sure that everyone has these kinds of controls. So that’s, we’re kind of in the early phases of doing that. The philosophical downstream question of whether you also let people pay to not have ads, I don’t know. There were a bunch of questions around whether that’s actually a good thing, but I personally don’t believe that very many people would like to pay to not have ads. All of the research that we have– it may still end up being the right thing to offer that as a choice down the line, but all of the data that I’ve seen suggests that the vast, vast, vast majority of people want a free service and that the ads, in a lot of places, are not even that different from the organic content in terms of the quality of what people are being able to see.
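[The two controls Zuckerberg distinguishes here — wiping everything, versus disconnecting a single app or site ad hoc — can be modeled in miniature. This is a toy sketch only; the class, method names, and example services are all hypothetical, not any real Facebook API.]

```python
class OffSiteActivity:
    """Toy model of the 'clear history' controls described above."""

    def __init__(self):
        self.events = []  # (service, event) pairs linked to the account

    def record(self, service, event):
        self.events.append((service, event))

    def clear_all(self):
        # The 'clear cookies' analogue: wipe everything at once.
        self.events.clear()

    def disconnect(self, service):
        # The ad-hoc control the research said people actually want:
        # stop associating one service without wiping the rest.
        self.events = [(s, e) for s, e in self.events if s != service]
```

Used like: record activity from `shoe-store.example` and `news-site.example`, then `disconnect("shoe-store.example")` removes only the shoe-store events while the rest of the history stays intact.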

Jonathan Zittrain: Yeah.

Mark Zuckerberg: People like being able to get information from local businesses and things like that too, so. So there’s a lot of good there.

Jonathan Zittrain: Yeah. Forty years ago it would have been the question of ABC versus HBO and the answer turned out to be yes.

Jonathan Zittrain: So you’re right. And people might have different things.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: There’s a little paradox lingering in there about if something’s so important and vital that we wouldn’t want to deprive anybody of access to it but therefore nobody gets it until we figured out how to remove it for everybody.

Mark Zuckerberg: What we– [ph?]

Jonathan Zittrain: In other words, if I could buy my way out of ads and data collection it wouldn’t be fair to those who can’t and therefore we all subsist with it until the advances you’re talking about.

Mark Zuckerberg: Yeah, but I guess what I’m saying is on the data use, I don’t believe that that’s something that people should buy. I think the data principles that we have need to be uniformly available to everyone. That to me is a really important principle. It’s, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn’t feel like a moral question to me.

Jonathan Zittrain: Yes.

Mark Zuckerberg: But the question of whether you can pay to have different privacy controls feels wrong. So that to me is something that in any conversation about whether we’d evolve towards having a subscription service, I think you have to have these controls first and it’s a very deep thing. A technical problem to go do, but we’re– that’s why we’re working through that.

Jonathan Zittrain: Yes. So long as the privacy controls that we’re not able to buy our way into aren’t controls that people ought to have. You know, it’s just the kind of underlying question of: is the system as it is, that we can’t opt out of, a fair system? And that’s of course, you know, you have to go into the details to figure out what you mean by it. But let’s, in the remaining time we have left–

Mark Zuckerberg: How are we doing on time?

Jonathan Zittrain: We’re good. We’re 76 minutes in.

Mark Zuckerberg: All right, into–

Mark Zuckerberg: We’re going to get through maybe half the topics.

Jonathan Zittrain: Yeah, yeah, yeah.

Mark Zuckerberg: And I’ll come back and do another one later.

Jonathan Zittrain: I’m going to bring this in for a landing soon. On my agenda left includes such things as taking out of the box the independent review stuff, chat a little bit about that. I’d be curious, and this might be a nice thing, really, as we wrap up, which would be a sense of any vision you have for what would Facebook look like in 10 or 15 years and how different would it be than the Facebook of 10 years ago is compared to today. So that’s something I’d want to talk about. Is there anything big on your list that you want to make sure we talk about?

Mark Zuckerberg: Those are good. Those are good topics.

Jonathan Zittrain: Fair enough.


Jonathan Zittrain: So all right, the external review board.

Mark Zuckerberg: Yeah. So one of the big questions that I have just been thinking about is, you know, we make a lot of decisions around content enforcement and what stays up and what comes down. And having gone through this process over the last few years of working on the systems, one of the themes that I feel really strongly about is that we shouldn’t be making so many of these decisions ourselves. You know, one of the ways that I try to reason about this stuff is take myself out of the position of being CEO of the company, almost like a Rawlsian perspective. If I was a different person, what would I want the CEO of the company to be able to do? And I would not want so many decisions about content to be concentrated with any individual. So–

Jonathan Zittrain: It is weird to see big impactful, to use a terrible word, decisions about what a huge swath of humanity does or doesn’t see inevitably handled as, like, a customer service issue. It does feel like a mismatch, which is what I hear you saying.

Mark Zuckerberg: So let’s, yeah, so I actually think the customer service analogy is a really interesting one. Right. So when you email Amazon, because they don’t, they make a mistake with your package, that’s customer support. Right. I mean, they are trying to provide a service and generally, they can invest more in customer support and make people happier. We’re doing something completely different, right.

When someone emails us with an issue or flags some content, they’re basically complaining about something that someone else in the community did. So it’s almost more like a court system in that sense. Doing more of that does not make people happy because in every one of those transactions one person ends up the winner and one is the loser. Either you said that the content was fine, in which case the person complaining is upset, or you took someone’s content down, in which case the person is really upset because you’re now telling them that they don’t have the ability to express something that they feel is a valid thing that they should be able to express.

So in some deep sense while some amount of what we do is customer support, people get locked out of their account, et cetera, you know, we now have, like, more than 30,000 people working on content review and safety review, doing the kind of judgments that, you know, it’s basically a lot of the stuff, we have machine learning systems that flag things that could be problematic in addition to people in the community flagging things, but making these assessments of whether the stuff is right or not. So one of the questions that I just think about, it’s like, okay, well, you have many people doing this.

Regardless of how much training they have, we’re going to make mistakes, right. So you want to start building in principles around, you know, what you would kind of think of as due process, right. So we’re building in an ability to have an appeal, right, which already is quite good in that we are able to overturn a bunch of mistakes that the first line people make in making these assessments. But at some level I think you also want a kind of independent appeal, right, where, okay, let’s say the appeals go to maybe a higher level of Facebook employee who is a little more trained in the nuances of the policies; but at some point, I think you also need an appeal to an independent group, which is, like: is this policy fair? Is this piece of content really getting on the wrong side of the balance of free expression and safety? And I just don’t think at the end of the day that that’s something that you want centralized in a single company. So now the question is how do you design that system, and that’s a real question, right, so we don’t pretend to have the answers on this. What we’re basically working through is we have a draft proposal and we’re working with a lot of experts around the world to run a few pilots in the first half of this year that hopefully we can codify into something that’s a longer term thing. But I just believe that this is an incredibly important thing. As a person, if I take aside the role that I have as CEO of the company, I do not want the company being able to make all of those final decisions without a check and balance and accountability, so I want to use the position that I’m in to help build that kind of an institution.

Jonathan Zittrain: Yes. And when we talk about an appeal, then, it sounds like you could appeal two distinct things. One is: this was the rule, but it was applied wrong to me. This, in fact, was parody [ph?], so it shouldn’t be seen as near the line.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And I want the independent body to look at that. The other would be the rule is wrong. The rule should change because–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: And you’re thinking the independent body could weigh in on both of those?

Mark Zuckerberg: Yeah. Over time, I would like the role of the independent oversight board to be able to expand to do additional things as well. I think the question is it’s hard enough to even set something up that’s going to codify the values that we have around expression and safety on a relatively defined topic.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: So I think the question is if you kind of view this as an experiment in institution building where we’re trying to build this thing that is going to have real power to–

Jonathan Zittrain: Yes.

Mark Zuckerberg: I mean, like, I will not be able to make a decision that overturns what they say. Which I think is good. I think also just it raises the stakes. You need to make sure we get this right, so.

Jonathan Zittrain: It’s fascinating. It’s huge. I think the way you’re describing it, I wouldn’t want to understate–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: That this is not a usual way of doing business.

Mark Zuckerberg: Yeah, but I think it– I think this is– I really care about getting this right.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: But I think you want to start with something that’s relatively well-defined and then hopefully expand it to be able to cover more things over time. So in the beginning I think one question that could come up is– my understanding– I mean, it’s always dangerous talking about legal precedent when this might be one of my first times at Harvard Law School. I did not spend a lot of time here–

Mark Zuckerberg: When I was an undergrad. But, you know what I mean– if the Supreme Court overturns something, they don’t tell Congress what the law should be, they just say there’s an issue here, right. And then basically there’s a process. Okay. So if I’m getting that wrong–

Mark Zuckerberg: Okay. I shouldn’t have done that.

Jonathan Zittrain: No, no. That’s quite honest. [ph?]

Mark Zuckerberg: I knew that was dangerous.

Mark Zuckerberg: And that that was a mistake.

Jonathan Zittrain: There are people who do agree with you.

Mark Zuckerberg: Okay. Oh, so that’s an open question that that’s how it works.

Jonathan Zittrain: It’s a highly debated question, yes.

Mark Zuckerberg: Okay.

Jonathan Zittrain: There’s the I’m just the umpire calling balls and strikes and in fact, the first type of question we brought up, which was, “Hey, we get this is the standard. Does it apply here?” lends itself a little more to, you know, you get three swings and if you miss them all, like, you can’t keep playing. The umpire can usher you away from the home plate. This is, I’m really digging deep into my knowledge now of baseball. There’s another thing about, like,–

Mark Zuckerberg: That’s okay. I’m not the person who’s going to call you out on getting something wrong there.

Jonathan Zittrain: I appreciate that.

Mark Zuckerberg: That’s why I also need to have a librarian next to me at all times.

Jonathan Zittrain: Very good. I wonder how much librarians tend to know about baseball.

Mark Zuckerberg: Aww.

Jonathan Zittrain: But we digress. Ah, we’re going to get letters, mentions.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: But whether or not the game is actually any good with a three strikes rule, maybe there should be two or four or whatever, starts to ask of the umpire more than just, you know, your best sense of how that play just went. Both may be something. Both are surely beyond standard customer service issues, so both could maybe be usefully externalized. What you’d ask the board to do in the category one kind of stuff maybe it’s true that, like, professional umpirage [ph?] could help us and there are people who are jurists who can do that worldwide. For the other, whether it’s the Supreme

Jonathan Zittrain: –court, or the so-called common law and state courts where often a state supreme court will be like, “Henceforth, 50 feet needs to be the height of a baseball net,” and like, “If you don’t agree, Legislature, we’ll hear from you, but until then it’s 50 feet.” They really do kind of get into the weeds. They derive maybe some legitimacy for decisions like that from being close to their communities, and it really regresses them to a question of: Is Facebook a global community, a community of 2.X billion people worldwide, transcending any national boundaries, and for which I think so far on these issues, it’s meant to be, “The rule is the rule,” it doesn’t really change in terms of service from one place to another– versus how much do we think of it as somehow localized– whether or not localized through government– but where different local communities make their own judgments?

Mark Zuckerberg: That is one of the big questions. I mean, right now we have community standards that are global. We follow local laws, as you say. But I think the idea is– I don’t think we want to end up in a place where we have very different norms in different places, but you want to have some sense of representation and making sure that the body that can deliberate on this has a good diversity of views. So these are a lot of the things that we’re trying to figure out, is like: Well, how big is the body? When decisions are made, are they made by the whole body, or do you have panels of people that are smaller sets? If there are panels, how do you make sure that you’re not just getting a random sample that kind of skews in the values perspective towards one thing? So then there are a bunch of mechanisms like, okay, maybe one panel that’s randomly constituted decides on whether the board will take up a question or one of the issues, but then a separate random panel of the group actually makes the decisions, so that way you eliminate some risk that any given panel is going to be too ideologically skewed. So there’s a bunch of things that I think we need to think through and work through, but the goal on this is to, over time, have it grow into something that can provide greater accountability and oversight to potentially more of the hard questions that we face, but I think it’s so high-stakes that starting with something that’s relatively defined is going to be the right way to go in the beginning.
So regardless of the fact that I was unaware of the controversy around the legal point that I made a second ago, I do think in our case it makes sense to start with not having this group say what the policies are going to be, but just have there be– have it be able to say, “Hey, we think that you guys are on the wrong side on this, and maybe you should rethink where the policy is because we think you’re on the wrong side.” There’s one other thing that I think is worth calling out, which is in a typical kind of judicial analog, or at least here in the U.S., my understanding, is there’s the kind of appeal route to the independent board considering an issue, but I also think that we want to have an avenue where we as the company can also just raise hard issues that come up to the board without having– which I don’t actually know if there’s any mechanism for that.
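The two-panel mechanism Zuckerberg describes, one randomly drawn panel choosing whether the board takes up a case and a separate, disjoint panel deciding it, could be sketched like this (an illustrative sketch only; the board size, panel size, and member names are invented):

```python
import random

def draw_panels(members, panel_size, rng=None):
    """Draw two disjoint random panels from a board of members.

    The first panel decides whether to take up a case; the second,
    entirely separate panel decides the case itself, reducing the risk
    that a single skewed sample controls both steps.
    """
    rng = rng or random.Random()
    pool = list(members)
    rng.shuffle(pool)
    intake_panel = pool[:panel_size]                    # takes up the case?
    decision_panel = pool[panel_size:2 * panel_size]    # decides the case
    return intake_panel, decision_panel

# Hypothetical 40-member board with 5-person panels.
board = [f"member_{i:02d}" for i in range(40)]
intake, decide = draw_panels(board, panel_size=5, rng=random.Random(7))
```

Because the shuffle mixes the whole board before either panel is cut, no member can sit on both panels for the same case.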

Jonathan Zittrain: It’s called an advisory opinion.

Jonathan Zittrain: But under U.S. federal law, it’s not allowed because of Article III Case or Controversy requirement, but state courts do it all the time. You’ll have a federal court sometimes say– because it’s a federal court but it’s deciding something under state law. It’ll be like, “I don’t know, ask Florida.” And they’ll be like, “Hey Florida,” and then Florida is just Florida.

Mark Zuckerberg: Sure. So I think that–

Jonathan Zittrain: So you can do an advisory opinion.

Mark Zuckerberg: –that’ll end up being an important part of this too. We’re never going to be able to get out of the business of making frontline judgments. We’ll have the AI systems flag content that they think is against policies or could be, and then we’ll have people– this set of 30 thousand people, which is growing– that is trained to basically understand what the policies are. We have to make the frontline decisions, because a lot of this stuff needs to get handled in a timely way, and a more deliberative process that’s thinking about the fairness and the policies overall should happen over a different timeframe than what is often relevant, which is the enforcement of the initial policy. But I do think overall for a lot of the biggest questions, I just want to build a more independent process.

Jonathan Zittrain: Well, as you say, it’s an area with fractal complexity in the best of ways, and it really is terra incognita, and it’d be exciting to see how it might be built out. I imagine there’s a number of law professors around the world, including some who come from civil rather than common law jurisdictions, who are like, “This is how it works over here,” from which you could draw. Another lingering question would be– lawyers often have a bad reputation. I have no idea why. But they often are the glue for a system like this so that a judge does not have to be oracular or omniscient. There’s a process where the lawyer for one side does a ton of work and looks at prior decisions of this board and says, “Well, this is what would be consistent,” and the other lawyer comes back, and then the judge just gets to decide between the two, rather than having to just know everything. There’s a huge tradeoff here for every appealed content decision, how much do we want to build it into a case, and you need experts to help the parties, versus they each just sort of come before Solomon and say, “This kind of happened,” and– or Judge Judy maybe is a more contemporary reference.

Mark Zuckerberg: Somewhere between the two, yeah.

Jonathan Zittrain: Yeah. So it’s a lot of stuff– and for me, I both find myself– I don’t know if this is the definition of prurient– both excited by it and somewhat terrified by it, but very much saying that it’s better than a status quo, which is where I think you and I are completely agreeing, and maybe a model for other firms out there. So that’s the last question in this area that pops to my mind, which is: What part of what you’re developing at Facebook– a lot of which is really resource-intensive– is best thought of as a public good to be shared, including among basically competitors, versus, “That’s part of our comparative advantage and our secret sauce”? If you develop a particularly good algorithm that can really well detect fake news or spammers or bad actors– you’ve got the PhDs, you’ve got the processors– is that like, “In your face, Schmitter [ph?],” or is like, “We should have somebody that– some body– that can help democratize that advance”? And it could be the same to be said for these content decisions. How do you think about that?

Mark Zuckerberg: Yeah, so certainly the threat-sharing and security work that you just referenced is a good area where there’s much better collaboration now than there was historically. I think that that’s just because everyone recognizes that it’s such a more important issue. And by the way, there’s much better collaboration with governments now too on this, and not just our own here in the U.S., and law enforcement, but around the world with election commissions and law enforcement, because there’s just a broad awareness that these are issues and that–

Jonathan Zittrain: Especially if you have state actors in the mix as the adversary.

Mark Zuckerberg: Yes. So that’s certainly an area where there’s much better collaboration now, and that’s good. There’s still issues. For example, if you’re law enforcement or intelligence and you have developed a– “source” is not the right word– but basically if you’ve identified someone as a source of signals that you can watch and learn about, then you may not want to come to us and tell us, “Hey, we’ve identified that this state actor is doing this bad thing,” because then the natural thing that we’re going to want to do is make sure that they’re not on our system doing bad things, or that they’re not– either they’re not in the system at all or that we’re interfering with the bad things that they’re trying to do. So there’s some mismatch of incentives, but as you build up the relationships and trust, you can get to that kind of a relationship where they can also flag for you, “Hey, this is what we’re at.” So I just think having that kind of baseline where you build that up over time is helpful. And I think security and safety is probably the biggest area of that kind of collaboration now, across all the different types of threats; not just election and democratic process type stuff, but any kind of safety issue. The other area where I tend to think about what we’re doing is– it should be open– is just technical infrastructure overall. I mean, that is probably a less controversial piece, but we open-source a lot of the basic stuff that runs our systems, and I think that that is a– that’s a contribution that I’m quite proud of that we do.

We have sort of pioneered this way of thinking about how people connect, and the data model around that is more of a graph, and the idea of graph database and a lot of the infrastructure for being able to efficiently access that kind of content I think is broadly applicable beyond the context of a social network.
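The graph data model being described, people as nodes and connections as edges that queries traverse, might be sketched minimally as follows (an illustrative toy only, not Facebook's actual graph infrastructure such as TAO):

```python
from collections import defaultdict

class SocialGraph:
    """Tiny adjacency-set graph: people are nodes, friendships are
    undirected edges, and queries walk relationships rather than
    scanning tables of content objects."""

    def __init__(self):
        self.edges = defaultdict(set)

    def connect(self, a, b):
        # Friendship is mutual, so add the edge in both directions.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def friends(self, person):
        return self.edges[person]

    def friends_of_friends(self, person):
        # People exactly two hops away who are not already direct friends.
        result = set()
        for friend in self.edges[person]:
            result |= self.edges[friend]
        return result - self.edges[person] - {person}

g = SocialGraph()
g.connect("alice", "bob")
g.connect("bob", "carol")
```

Here a friends-of-friends lookup for "alice" traverses through "bob" to reach "carol", the kind of relationship-first query a graph store makes cheap.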

When I was here as an undergrad, even though I wasn’t here for very long, I studied psychology and computer science, and to me– I mean, my grounding philosophy on this stuff is that basically people should be at the center of more of the technology that we build. I mean, one of the early things that I kind of recognized when I was a student was like– at the time, there were internet sites for finding almost anything you cared about, whether it’s books or music or news or information or businesses– but as people, we think about the world primarily in terms of other people, not in terms of other objects, not cutting things up in terms of content or commerce or politics or different things, but it’s like– the stuff should be organized around the connections that people have, where people are at the centerpiece of that, and one of the missions that I care about is over time just pushing more technology development in the tech industry overall to develop things with that mindset. I think– and this is a little bit of a tangent– but the way that our phones work today, and all computing systems, organized around apps and tasks is fundamentally not how people– how our brains work and how we approach the world. It’s not– so that’s one of the reasons why I’m just very excited longer-term about especially things like augmented reality, because it’ll give us a platform that I think actually is how we think about stuff. We’ll be able to bring the computational objects into the world but fundamentally we’ll be interacting as people around them. The whole thing won’t be organized around an app or a task; it’ll be organized around people, and that I think is a much more natural and human system for how our technology should be organized. So open-sourcing all of that infrastructure– to do that, and enabling not just us but other companies to kind of get that mindset into more of their thinking and the technical underpinning of that, is just something that I care really deeply about.

Jonathan Zittrain: Well, this is nice, and this is bringing us in for our landing, because we’re talking about 10, 20, 30 years ahead. As a term of art, I understand augmented reality to mean, “I’ve got a visor”– version 0.1 was Google Glass– something where I’m kind of out in the world but I’m literally online at the same time because there’s data coming at me in some– that’s what you’re talking about, correct?

Mark Zuckerberg: Yeah, although it really should be glasses like what you have. I think we’ll probably– maybe they’ll have to be a little bigger, but not too much bigger or else it would start to get weird.

Mark Zuckerberg: So I don’t think a visor is going to catch on. I don’t think anyone is psyched about that feature.

Jonathan Zittrain: And anything involving surgery starts to sound a little bad too.

Mark Zuckerberg: No, no, we’re definitely focused on–

Mark Zuckerberg: –on external things. Although–

Jonathan Zittrain: Like, “Don’t make news, don’t make news, don’t make news.”

Mark Zuckerberg: No, no, no. Although we have shown this demo of basically– can someone type by thinking? And of course when you’re talking about brain-computer interfaces, there are two dimensions of that work. There’s the external stuff, and there’s the internal, invasive stuff, and yes, of course if you’re actually trying to build things that everyone is going to use, you’re going to want to focus on the noninvasive things.

Jonathan Zittrain: Yes. Can you type by thinking?

Mark Zuckerberg: You can.

Jonathan Zittrain: It’s called a Ouija Board. No. But you’re subvocalizing enough or there’s enough of a read of–

Mark Zuckerberg: No, no, no. So there’s actually a bunch of research here– there’s a question of throughput and how quickly you can type and how many bits you can express efficiently, but the basic foundation for the research is: a bunch of folks who are doing this research showed people images– I think it was animals– so, “Here’s an elephant, here’s a giraffe”– while they had kind of a net on their head, noninvasive, shining light and therefore looking at the level of blood flow and activity in the brain– and trained a machine learning model, basically, on what the pattern of that activity looked like when the person was looking at different animals. Then they told the person to think about an animal, right? So just pick one of the animals to think about– and they can predict what the person was thinking about, in broad strokes, just based on matching the neural activity. So the question is whether you can use that to type.
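The decoding experiment described here, learning what activity pattern each animal evokes and then matching a new pattern to the nearest learned one, can be sketched with synthetic data (an editor's toy illustration, not the actual research system; the patterns are random vectors standing in for blood-flow readings):

```python
import math
import random

rng = random.Random(0)
animals = ["elephant", "giraffe", "lion"]
DIM = 16  # dimensions of our synthetic "activity pattern"

def noisy(base, scale):
    # A noisy observation of an underlying activity pattern.
    return [x + rng.gauss(0, scale) for x in base]

# Pretend each animal evokes one characteristic pattern in the brain.
prototypes = {a: [rng.gauss(0, 1) for _ in range(DIM)] for a in animals}

# "Training": average 30 noisy observations per animal into a centroid,
# a minimal stand-in for fitting a classifier on labeled recordings.
centroids = {}
for a in animals:
    samples = [noisy(prototypes[a], 0.3) for _ in range(30)]
    centroids[a] = [sum(col) / len(col) for col in zip(*samples)]

def decode(pattern):
    # Predict whichever animal's learned centroid is nearest (Euclidean).
    return min(centroids, key=lambda a: math.dist(pattern, centroids[a]))

# A fresh "thought": a new noisy observation of one animal's pattern.
guess = decode(noisy(prototypes["giraffe"], 0.3))
```

Nearest-centroid matching is about the simplest decoder that captures the idea; the real systems use far richer models and real fNIRS-style signals.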

Jonathan Zittrain: Fifth amendment implications are staggering.

Jonathan Zittrain: Sorry.

Mark Zuckerberg: Well, yes. I mean, presumably this would be something that someone would choose to use as a product. I’m not– yeah, yeah. I mean, yes, there are of course all the other implications, but yeah, I think that this is going to be– that’s going to be an interesting thing down the line.

Jonathan Zittrain: But basically your vision then for a future–

Mark Zuckerberg: I don’t know how we got onto that.

Jonathan Zittrain: You can’t blame me. I think you brought this up.

Mark Zuckerberg: I did, but of all the things that– I mean, this is exciting, but we haven’t even covered yet how we should talk about– tech regulation and all this stuff I figured we’d get into. I mean, we’ll be here for like six or seven hours. I don’t know how many days you want to spend here talking about this, but–

Jonathan Zittrain: “We’re here at the Zuckerberg Center and hostage crisis.”

Jonathan Zittrain: “The building is surrounded.”

Mark Zuckerberg: Yeah. But I think a little bit on future tech and research is interesting too, so.

Jonathan Zittrain: Please.

Mark Zuckerberg: Yeah, we’re good.

Jonathan Zittrain: Oh, we did cover it, is what you’re saying.

Mark Zuckerberg: I mean, but going back to your question about what– if this is the last topic– what I’m excited about for the next 10 or 20 years– I do think over the long term, reshaping our computing platforms to be fundamentally more about people and how we process the world is a really fundamental thing. Over the nearer term– so call it five years– I think the clear trend is towards more private communication. If you look at all of the different ways that people want to share and communicate across the internet– but we have a good sense of the cross-strength, everything from one-on-one messages to kind of broadcasting publicly– the thing that is growing the fastest is private communication. Right?

So between WhatsApp and Messenger, and Instagram now, just the number of private messages– it’s about 100 billion a day through those systems alone, growing very quickly, growing much faster than the amount that people want to share or broadcast into a feed-type system. Of the type of broadcast content that people are doing, the thing that is growing by far the fastest is stories. Right?

So ephemeral sharing of, “I’m going to put this out, but I want to have a timeframe after which the data goes away.” So I think that that just gives you a sense of where the hub of social activity is going. It also is how we think about the strategy of the company. I mean, people– when we talk about privacy, I think a lot of the questions are often about privacy policies and legal or policy-type things, and privacy as a thing not to be breached, and making sure that you’re within the balance of what is good. But I actually think that there’s a much more– there’s another element of this that’s really fundamental, which is that people want tools that give them new contexts to communicate, and that’s also fundamentally about giving people power through privacy, not just not violating privacy, right? So not violating privacy is a backstop, but actually– you can kind of think about all the success that Facebook has had– this is kind of a counterintuitive thing– has been because we’ve given people new private or semi-private ways to communicate things that they wouldn’t have had before.

So thinking about Facebook as an innovator in privacy is certainly not the mainstream view, but going back to the very first thing that we did, making it so Harvard students could communicate in a way that they had some confidence that their content and information would be shared with only people within that community, there was no way that people had to communicate stuff at that scale, but not have it either be completely public or with just a small set of people before. And people’s desire to be understood and express themselves and be able to communicate with all different kinds of groups is, in the experience that I’ve had, nearly unbounded, and if you can give people new ways to be able to communicate safely and express themselves, then that is something that people just have a deep thirst and desire for.

So encryption is really important, because I mean, we take for granted in the U.S. that there’s good rule of law, and that the government isn’t too much in our business, but in a lot of places around the world, especially where WhatsApp is the biggest, people can’t take that for granted. So having it so that you really have confidence that you’re sharing something one-on-one and it’s not– and it really is one-on-one, it’s not one-on-one and the government there– actually makes it so people can share things that they wouldn’t be comfortable otherwise doing it. That’s power that you’re giving people through building privacy innovations.

Stories I just think is another example of this, where there are a lot of things that people don’t want as part of the permanent record but want to express, and it’s not an accident that that is becoming the primary way that people want to share with all of their friends, not putting something in a feed that goes on their permanent record. There will always be a use for that too– people want to have a record and there’s a lot of value that you can build around that– you can have longer-term discussions– it’s harder to do that around stories. There’s different value for these things. But over the next five years, I think we’re going to see all of social networking kind of be reconstituted around this base of private communication, and that’s something that I’m just very excited about. I think that that’s– it’s going to unlock a lot of people’s ability to express themselves and communicate things that they haven’t had the tools to do before, and it’s going to be the foundation for building a lot of really important tools on top of that too.
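The "timeframe after which the data goes away" idea behind stories can be sketched as a toy feed with expiring posts (an illustrative sketch only; a real stories system would also delete the stored data rather than merely hide it):

```python
import time

class EphemeralFeed:
    """Posts carry a timestamp; reads filter out anything older than
    the feed's timeframe, so sharing is not part of a permanent record."""

    def __init__(self, ttl_seconds, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock    # injectable clock, handy for testing
        self.posts = []       # list of (timestamp, content)

    def post(self, content):
        self.posts.append((self.clock(), content))

    def visible(self):
        now = self.clock()
        # Expired posts are dropped on read; only fresh ones remain.
        self.posts = [(t, c) for t, c in self.posts if now - t < self.ttl]
        return [c for _, c in self.posts]

# A story visible within its 24-hour timeframe disappears after it.
now = [0.0]
feed = EphemeralFeed(ttl_seconds=24 * 3600, clock=lambda: now[0])
feed.post("beach day")
now[0] = 12 * 3600        # 12 hours later
still_there = feed.visible()
now[0] = 25 * 3600        # 25 hours later
gone = feed.visible()
```

Injecting the clock keeps the expiry logic deterministic and testable without waiting out a real day.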

Jonathan Zittrain: That’s so interesting to me. I would not have predicted that direction for the next five years. I would have figured, “Gosh, if you already know with whom you want to speak, there are so many tools to speak with them,” some of which are end-to-end, some of which aren’t, some of which are roll-your-own and open-source, and there’s always a way to try to make that easier and better, but that feels a little bit to me like a kind of crowded space, not yet knowing of the innovations that might lie ahead and means of communicating with the people you already know you want to talk to. And for that, as you say, if that’s where it’s at, you’re right that encryption is going to be a big question, and otherwise technical design so that if the law comes knocking on the door, what would the company be in a position to say.

This is the Apple iPhone Cupertino– sorry, San Bernardino case– and it also calls to mind will there be peer-to-peer implementations of the things you’re thinking about that might not even need the server at all, and it’s basically just an app that people use, and if it’s going to deliver an ad, it can still do that appside, and how much governments will abide it. They have not, for the most part, demanded technology mandates to reshape how the technology works. They’re just saying, “If you’ve got it”– in part you’ve got it because you want to serve ads– “we want it.” But if you don’t even have it, it’s been rare for the governments to say, “Well, you’ve got to build your system to do it.” It did happen with the telephone system back in the day. CALEA, the Communications Assistance to Law Enforcement Act, did have federal law in the United States saying, “If you’re in the business of building a phone network, AT&T, you’ve got to make it so we can plug in as you go digital,” and we haven’t yet seen those mandates in the internet software side so much. So we can see that coming up again. But it’s so funny, because if you’d asked me, I would have figured it’s encountering people you haven’t met before and interacting with them, for which all of the stuff about air traffic control of what goes into your feed and how much your stuff gets shared– all of those issues start to rise to the fore, and it gets me thinking about, “I ought to be able to make a feed recipe that’s my recipe, and fills it according to Facebook variables, but I get to say what the variables are.” But I could see that if you’re just thinking about people communicating with the people they already know and like, that is a very different realm.

Mark Zuckerberg: It’s not necessarily– it’s not just the people that you already know. I do think– we’ve really focused on friends and family for the last 10 or 15 years, and I think a big part of what we’re going to focus on now is around building communities in different ways and all the utility that you can build on top of, once you have a network like this in place. So everything from how people can do commerce better to things like dating, which is– a lot of dating happens on our services, but we haven’t built any tools specifically for that.

Jonathan Zittrain: I do remember the Facebook joint experiment (“experiment” is such a terrible word) study, by which one could predict when two Facebook members are going to declare themselves in a relationship, months ahead of the actual declaration. I was thinking some of the ancillary products were in-laws.

Mark Zuckerberg: That was very early. Yeah. So you’re right that a lot of this is going to be about utility that you can build on top of it, but a lot of these things are fundamentally private, right? So if you’re thinking about commerce, that people have a higher expectation for privacy, and the question is: Is the right context for that going to be around an app like Facebook, which is broad, or an app like Instagram?

I think part of it is– the discovery part of it, I think we’ll be very well served there– but then we’ll also transition to something that people want to be more private and secure. Anyhow, we could probably go on for many hours on this, but maybe we should save this for the Round 2 of this that we’ll do in the future.

Jonathan Zittrain: Indeed. So thanks so much for coming out, for talking at such length, for covering such a kaleidoscopic range of topics, and we look forward to the next time we see you.

Mark Zuckerberg: Yeah. Thank you.

Jonathan Zittrain: Thank you.

Mark Zuckerberg has kicked off his 2019 public-discussion challenge – here are the highlights from the first one


Mark Zuckerberg has begun his personal challenge for 2019: taking part in public discussions about the future of technology and society.

Every year, the 34-year-old billionaire CEO sets himself an often whimsical challenge, from only eating meat from animals he killed himself to building an AI assistant for his home. For 2019, after months of sustained criticism over Facebook's endless scandals, Zuckerberg said: "Every few weeks I'll talk with leaders, experts, and people in our community from different fields, and I'll try different formats to keep it interesting."

Nearly two months in, we have now seen the first one: a discussion with Harvard Law School professor Jonathan Zittrain.

Here are some of the key highlights, from Zuckerberg's reflections on the potential of blockchain to an awkward gaffe.

The CEO argued that people don't really want to pay to avoid ads.

One topic of discussion was the idea of an ad-free Facebook that users pay to access.

"I don't personally think that a lot of people would want to pay to not have ads. All the research we have suggests it may still be the right thing to offer that as a choice down the line, but all the data I've seen indicates that the vast, vast majority of people want a free service, and that the ads, in a lot of places, are not even that different from the organic content in terms of the quality of what people are able to see," he said.

Notably, Facebook's business has made billions of dollars over the years through its advertising services, and changing its approach would be labor-intensive and potentially costly. The Information recently reported that Facebook has considered a subscription model in the past, but held back partly out of concern that Facebook's wealthiest users, the most lucrative ones for advertisers, would also be the most likely to pay for the service.

Mark Zuckerberg discussed how Facebook might one day implement blockchain.

It's no secret that Facebook has a team dedicated to blockchain technology, but the company has kept quiet about what that team is working on.

Zuckerberg suggested the technology could be used to authenticate people across apps and services, much as Facebook Connect, the company's login tool, is used today.

"One of the things that I've been thinking about is using blockchain ... around authentication, basically granting access to your information to different services. So, basically replacing our notion of what we have with Facebook Connect with something that's fully distributed. Basically, you take your information, you store it on some decentralized system, and you have the choice of logging in to places without going through an intermediary," he said.

Mark Zuckerberg said people don't want cameras in their living rooms, seemingly forgetting that Facebook has built exactly such a thing.

In an exchange about privacy, Zuckerberg said: "We definitely don't want a society where there's a camera in everyone's living room watching the content of those conversations."

The only problem? Facebook sells exactly such a product in the form of Portal, a video-calling device powered by Amazon's Alexa that lets you video-chat with your friends.

Facebook may turn to users to crowdsource fact-checking.

Facebook's fact-checking program appears to be in disarray, with third-party partners like Snopes pulling out and criticizing the effort. Zuckerberg suggested that in the future, Facebook may try moving away from experts in favor of crowdsourced efforts by ordinary users.

"I think what you want to get to over time is more of a crowdsourced model where people, it's not that people are trusting some basic set of experts who are accredited but sit in some lofty institution somewhere else. It's more like: do you trust, yeah, if you get enough data points from within the community of people reasonably looking at something," he said.

There are some obvious problems with this approach. Is crowdsourcing the right way to find the truth when a hoax is widely believed, like the idea that vaccines are dangerous, or that Hillary Clinton secretly runs a pedophile ring? And, as Zittrain pointed out, if you turn to users for fact-checking, the odds of the system being gamed by bad actors go up.

Zuckerberg added: "There are a lot of questions here, which is why I'm not sitting here announcing a new program."



Citymapper announces a subscription service covering multiple transportation methods – TechCrunch


Citymapper is becoming a fintech startup. The company has announced a prepaid card called Citymapper Pass for users based in London. The new product is both a subscription service that aggregates all your transportation subscriptions and a plastic card you can use to pay for your rides.

According to Wired, Citymapper will start with two weekly subscription packages. For £30 per week, you get full access to Zones 1 and 2 of the TfL network. For an extra £10 per week, you also get unlimited Santander Cycles rides and two rides on Citymapper's ride-sharing service.

This isn't revolutionary for London commuters, but it's a start. Eventually, the startup wants to add more transportation methods, from dockless bikes to electric scooters and other private networks. That will be a bit more complicated, though, since the startup needs to strike a deal with each company.

You could imagine building a custom package with your favorite transportation methods and paying once for all those services. More interestingly, the plastic card is a good old prepaid card. You can top up your balance, much like topping up a Revolut account, and you can also use the card if you travel to a different zone.

The card should be compatible with Apple Pay and Google Pay. And if you travel often, Citymapper lets you pause your subscription at any time; there is no long-term commitment.

As urban transportation becomes more fragmented, Citymapper wants to act as an aggregator. Many people already rely on the app to plan their journeys, but the startup now wants to go beyond maps. It could also be a way to monetize the service. You will be able to subscribe to Citymapper Pass in March or April.

How to cancel a Twitch / YouTube / Mixer subscription


Welcome to TNW Basics, a collection of tips, tricks, guides, and advice on how to get the most out of your gadgets, apps, and other stuff.

Subscribing to channels on livestreaming and video sites is a great way to support your favorite content creators for just a few dollars a month. In an ideal world, we would all stay subscribed to the channels we love indefinitely. But hey, we get it. Money troubles, payment-method problems, or waning interest in a channel... things happen. So what do you do when you want to cancel a subscription?

It probably won't surprise anyone that the option to cancel a paid membership is harder to find than the option to start one. The latter is usually a big, shiny button on the front of a channel page, while the former is buried in the settings. Here's how to cancel your paid subscriptions on three popular streaming sites.

Twitch

All Twitch partners and affiliates offer channel subscriptions, with three different tiers. Underscoring how hard it is to unsubscribe: once you've subscribed to a channel, the button changes from "Subscribe" to "Gift a Sub." If you have a valid payment option on file, your subscription will renew automatically every month. To see all your payment options, go to the "Subscriptions" page under the drop-down menu in the top-right corner.

There you'll find all your active subscriptions arranged as tiles (canceled and gifted subscriptions live in another tab). To cancel any subscription, select the gear icon on its tile and choose the red "Don't Renew Subscription" option. The subscription will remain active for the rest of the month.

YouTube

Since YouTube already uses the word "subscribe" for anyone following a particular channel, the paid version is called a "membership." As with other livestreaming services, getting a membership, or "joining," grants you access to certain perks. These can be channel-specific emoji, discounts on branded merchandise, or other content. According to Google support, if you decide to become a channel member, your subscription is set to renew automatically every month.

If you want to cancel it, you have to go to the channel or one of its videos. In place of the "Join" button, you'll see one that says "View perks." This takes you to your channel membership page. In the top-right corner, click the gear icon next to "Your membership," where you'll find the option to "End membership & perks." Google tries to guilt you by asking you to state your reason for canceling; you don't have to answer if you'd rather not. You can also do this via the "Purchases" option in the sidebar.

Mixer

Microsoft's homegrown streaming service has grown enough to offer channel partnerships, and users can subscribe to a partnered channel for a small fee (typically $5.99). Given the rise of Sparks and Embers as ways to pay streamers without shelling out your own cash, subscriptions aren't the primary form of support, but they're still a good way to get extra perks.

All of your channel subscriptions can be found under the "Billing" tab in your settings. There you can track the payment method for each subscription and whether it's set to renew automatically. The "Cancel" option sits on the far right, in red. You may find yourself using it more than the other options, since canceling and resubscribing is currently the only way to change your payment method on Mixer.

Anything else you'd like to know about streaming services? Let us know!

The headphone jack lives! – TechCrunch


Reports of the headphone jack's death have been exaggerated. Or rather, premature. All of the latest Samsung Galaxy phones feature a 3.5mm port, bucking the trend set by Apple and followed by Google. While the headphone jack may still disappear eventually, in 2019 it lives on, and it could be a major selling point for the four versions of the Samsung Galaxy S10.

Apple dropped the 3.5mm jack in 2016 with the launch of the iPhone 7, and some of us still aren't over it. The port has been around for generations. The 3.5mm audio jack is universal and convenient, letting anyone pick up a set of headphones, whether they cost $10 or $1,000, and plug them into their phone. But alas, Apple removed the port from the iPhone, and several manufacturers followed, including Google. Not Samsung.

While the rest of the industry has abandoned the 3.5mm jack, Samsung has kept it in its latest smartphones and started promoting it as an advertised feature. What was once standard on every phone has become a Samsung selling point. This isn't the first time Samsung has gone against the grain and kept a legacy feature to attract buyers.

Smartphones used to have expandable memory, but as flash storage capacities grew, manufacturers stopped putting MicroSD card slots in phones. Not Samsung. Expandable memory is still an option on the S10 announced today.

There's a reason Samsung is the world's top smartphone maker: it listens to its customers, and clearly customers want the versatility of a 3.5mm headphone jack. I do.

Alas, the 3.5mm jack won't live forever. Once there's a better solution, the industry will eventually move past the analog connection. But that's not now. Today, in 2019, the headphone jack has a friend in Samsung.

Samsung's Galaxy S10 has a built-in Instagram mode – TechCrunch


After weeks of leaks, Samsung still saved a few surprises for today's event. Among the most interesting is a partnership with Instagram that brings Stories directly to the camera app.

It's an interesting partnership that benefits both sides. For some, it may signal a return to the preloaded bloatware of old, but in Instagram's case, at least, the app is nearly ubiquitous among most users anyway.

The mode got a brief demo on stage today. It's pretty much what you'd expect: it brings filters directly into the camera software and lets you upload straight to the service without ever leaving Samsung's default camera app.

Smartphone makers have found it increasingly difficult to differentiate their camera offerings in recent years. Recent generations of the Galaxy line, the iPhone, and the Pixel have leaned ever more on AI/ML/software updates to set themselves apart, so partnerships like this are sure to play a role going forward.

Samsung finally shows off its new foldable smartphone, the Galaxy Fold


Samsung announced more details about its foldable smartphone, called the Galaxy Fold, on Wednesday.

When the phone is folded, the external display measures 4.6 inches. Unfolded, the display measures 7.3 inches.

This story is being updated as the announcement continues.

Made.com founder Ning Li launches cosmetics startup Typology – TechCrunch


Meet Typology, a new Paris-based startup focused on premium skincare and cosmetics products for consumers. The startup was founded by Made.com co-founder Ning Li and officially launches today.

"Typology is a relatively ambitious project. We want to challenge FMCG [fast-moving consumer goods] brands with a digital pure player," Ning Li told me. "I've spent my entire career in e-commerce. I've seen many industries shift from offline to online. But some industries, such as cosmetics, food, and DIY, have been slow to move to online channels."

It starts with a set of values. Typology wants to set itself apart from the cosmetics giants with simple ingredient lists and products that aren't hazardous to your skin or the environment. The company also promises that all of its products are vegan, cruelty-free, and made in France.

So the startup ticks all the right boxes. But if you've been following up-and-coming skincare companies, countless brands make the same promises.

The main difference is that Typology doesn't want to be yet another small-batch beauty brand. The team wants to build an e-commerce giant with multiple sub-brands, hundreds of products, and an aggressive e-commerce strategy.

"Unilever, L'Oréal, and Procter & Gamble hold more than 50 percent of the market. And on the other side, you have a lot of indie brands that are very small and may never break out," Ning Li said.

Typology plans to launch 10 different product lines in the coming months. Each line has its own concept and its own sub-brand, and everything is developed in-house.

Today, the startup is launching three sub-brands. "Raw" is all about mixing products at home. Order a kit and you'll get oils, powders, a spoon, and a small box to make your own face masks, hair oil, beard oil, and more. You can also order each product individually; Raw products are made from a single ingredient.

In the "Lab" product line, you'll find nothing but cosmetic serums. The company is launching with six different small bottles. Each serum has its own set of properties, depending on your needs.

Finally, "Ten" products are basic skincare products with fewer than 10 ingredients. The company is starting with face, hand, and body moisturizers. Soon, the startup will also launch shower gel, shampoo, micellar water, and makeup remover.

When it comes to branding and packaging, Typology bets everything on minimalist design. I'm sure branding experts would tell you that a clean white label conveys transparency and simplicity. It's also worth noting that Typology is a unisex brand.

The company wants to use recyclable packaging as much as possible by relying on glass and aluminum, though you'll get plastic bottles if you order the larger products.

For now, Typology is only available in France, but the company plans to expand to other European countries soon. And it probably means it, because it has already raised a significant seed round.

The startup has raised $10 million from Alven Capital, Marc Simoncini, Xavier Niel, and Firstminute Capital. Twelve people now work at Typology.

Some sub-brands may click instantly, while others may not attract as many customers. Typology is using its war chest to try many different things and experiment with positioning. It will be interesting to see how the product lineup evolves over the years.

How to choose the right musical instrument for you


This is the second article in a series exploring how STEM workers, students, and entrepreneurs can learn to make music. The first article explained why playing an instrument or making music has many benefits beyond those that come from just listening. In this article, we'll help you find the right starter instrument.

Hopefully we've convinced you to pick up an instrument and learn to play. If you're not sure where to start, don't worry: we've got you covered. It's as easy as 1, 2, 3:

  1. Choose an instrument.
  2. Buy an instrument.
  3. Play.

It doesn't really matter which instrument you start with. You can take a silly personality quiz to figure out which one suits you. If you want a challenge, try learning to play the trombone in an apartment building. Want something super simple? Get a kazoo. What matters is that it's an instrument you will play.

Dr. Daniel Levitin, a neuroscientist, musician, and author of "This Is Your Brain on Music," told TNW:

It's really important to choose an instrument that makes a sound you love. Many people fall in love with the sound of a particular instrument when they're children. That's a great motivator. There's a story about the great pianist Arthur Rubinstein, who asked his parents for a piano when he was three years old. They didn't want him to play the piano, so they bought him a violin. He smashed it. They relented and bought him a piano.

The moral of the story: pick the sound you love and get the instrument that makes it. If your parents give you a violin, smash it in front of them, even if you wanted a violin. It's important to establish dominance in these relationships, or people will walk all over you (actually, this is probably terrible advice).

The good news is that you don't have to buy a custom Fender Stratocaster or a top-of-the-line Roland V-Drums kit to get started. Almost any instrument you might want to learn has a beginner-friendly path, which means there's always a relatively cheap option to learn on. For brevity's sake, we'll only discuss a handful of instrument types, mostly those associated with a garage jam band, but what you play is up to you.

We recommend starting cheap. You can go all out and buy something pricey right out of the gate if you like, but there's nothing wrong with getting a beginner's model when you're a beginner. If money is no object, then by all means, buy a grand piano as your starter keyboard. Otherwise, figure out exactly how much you want to spend and research instruments in that range.

Cheaper than an Xbox

Guitars, basses, keyboards, and, to a lesser extent, drums, along with most other traditional instruments, can all be had for under $250 if you're willing to shop around a bit. Like any hobby, playing an instrument can be as expensive as you want it to be. If you're really frugal, you can build a cigar-box guitar or play buckets. Just make some noise; it relieves stress.

Also, don't sleep on buckets:

Here are a few suggestions to get you started, but you'll want to do further research based on your personal preferences.

Credit: Fender

The legendary Fender Stratocaster, Player Series: $674.99 on Amazon

Guitar:

Credit: Fender

Fender American Performer Mustang bass guitar, a beautiful beast: $1,199.99 on Amazon.

Bass:

Credit: Roland

Behold: Roland's Ax-Edge Keytar. $999.00 on Amazon.

Keyboard:

Credit: Roland

Roland V-Compact Series electronic drum kit (TD-17KV-S). $1,199.99 on Amazon.

Drums:

  • Ludwig LC178X025 Questlove Pocket Kit 4-piece drum set: an inexpensive, compact kit similar to the one played by The Roots' drummer Questlove.

  • Pyle PED041: one of the only options for a cheap electronic drum kit. Don't expect bells and whistles, but you can learn to play on it.

  • Alesis CompactKit 7: a tabletop option good for drumming along and learning stick skills, for those who can't afford a full kit or don't have the space.

None of the options listed above costs more than $250, and with proper care each should last for years. But if traditional instruments aren't your thing, there's still something for you.

You may find that a beat machine or an analog synthesizer better suits your style. If you're still not convinced, picture yourself sampling vinyl or playing a Theremini.

You can never go wrong with a Moog.

Music is for everyone. Those with physical limitations that keep them from playing traditional instruments have options today too. The Jamboxx, invented by Dave Whalen, is an example of an instrument designed so that anyone, regardless of physical ability, can play it.

If we haven't found your dream instrument yet, don't worry. When we said we had a suggestion for everyone, we meant it, by Zakk Wylde's beard. We saved the machine that lets you play actual lightning for last. We give you the Plasma Drive from Erica Synths and Gamechanger Audio:

Just start playing

Once you've bought an instrument, the road to mastery begins like any other journey: with a single step. You're not alone, but a discussion of the ways technology has made learning an instrument easier than ever is beyond the scope of a single article.

We'll take a closer look at learning apps such as Fender Play, and at high-tech hardware from companies like Moog, Erica Synths, Native Instruments, and Roland, over the coming weeks. In the meantime, we hope you'll consider digging that old guitar out of your basement, or trying your hand at programming synthesizers. Playing music is an effective way to reduce stress and anxiety. Dr. Levitin told us that as little as five minutes of practice a day can provide immediate mental and physical health benefits.

A challenge

So here's our challenge for you: every other day, replace 15 minutes of scrolling through news, social media, or games with practicing an instrument. Science says you'll feel better mentally and physically, and your mood will improve.

Jackie DeShannon sang that love is the only thing that there's just too little of, but we disagree. The world needs more musicians.

Published on February 20, 2019, 16:59 UTC