Facebook: Can Ethics Scale in the Digital Age?
Since its founding in 2004, Facebook has built a phenomenally successful business at global scale, becoming the fifth most valuable public company in the world. The Cambridge Analytica revelations of March 2018, in which the data of 87 million users was harvested and exploited during the 2016 U.S. election cycle, exposed a breach of trust and privacy within its user community. In the past, growth at any cost appeared to be the de facto strategy; now many voices, including regulators, advertisers, ethicists, shareholders, and users, were demanding a more responsible approach to addressing their concerns. Mark Zuckerberg (founder, CEO, and chairman) and Sheryl Sandberg (COO) mapped out a six-point plan to address this existential threat. Could they continue to grow while repairing the breach of trust and privacy? Did other stakeholders bear some greater responsibility as well?

Beyond privacy and trust, a growing chorus of concern surrounds content moderation: not the easy topics such as spam or copyrighted material, but the hard questions of political viewpoints, hate speech, and polarizing perspectives. How will Facebook strike a balance between free speech and corrosive content across billions of users and dozens of languages? Should the company be the arbiter of truth and censorship in the digital world?