The Facebook scandal and why we need to get better at social system design

Like many people who were advocates for social tools in the early 2000s, I regret that the comfortable world of collegiate, respectful debate and empathetic connection we enjoyed as early bloggers morphed into the dystopian world of Facebook manipulation, /b/, trolling and online misogyny, and helped bring about Brexit, Trump and even the Chinese state proposing to use social credit to deny access to government services.

We should have known better. When the first newspapers opened up comments and random, unconnected people were given a way to shout into the void, they grabbed it with both hands. And yet, in the more intimate communities we inhabited, people continued to behave in a relatively normal, respectful way. Even today you can find places like Kialo or Quora, where people take time to answer questions, engage in debate and share knowledge; but even a site like Reddit, which can showcase hilarious and heart-warming community behaviours, becomes a very dark place if you take a wrong turn.

Given what we know about human nature, and how people can behave when the constraints of social norms or rules are removed, we need to take great care when designing social systems of any kind. History is littered with warnings about what happens when we get it wrong, or when we are blind to the unintended consequences of breaking down the old order.

Cambridge Analytica and hacking democracy

The current scandal about the mis-use of Facebook data to manipulate elections feels like a pivotal moment in the recent history of the internet and its growing power over societies around the world. Carole Cadwalladr at the Guardian has done a great job of tracing how Cambridge Analytica mis-used data originally obtained via a Facebook psychology quiz to influence elections and manipulate the media, fending off legal threats for two years until Facebook finally admitted the story was true.

Facebook now faces a real challenge to maintain public trust, and the growing #deletefacebook movement might yet bring down one of the most powerful companies of this era; but we should not rule out the company finding a way to square the circle of maintaining a business model based on hoovering up personal data for ad targeting without losing the trust of its users. Facebook is fun and useful for many people, and especially among its older users it seems to have become part of the fabric of their lives, so we should not simply write it off.

Mark Zuckerberg’s first response – his apology post of March 21 – was clearly inadequate and a rather poor example of the corporate ‘we will review our processes to ensure this never happens again’ genre. Facebook began as a ‘hot or not’ website, which is hardly a heroic social mission, and as early investors and executives have since admitted, it quickly learned how to push people’s buttons to gain access to vast amounts of data that could then be used to manipulate them through advertising.

As is customary for modern billionaires, Zuckerberg then grafted on a vague personal social mission, listening tours and a foundation. But regardless of whether he is a decent person (he probably is), the real questions are why geeks, with their rather limited worldview, should be designing social systems that centralise so much power over people’s lives, and how we can design better social systems to mitigate that.

Facebook was clearly complicit in the mis-use of its platform and user data pre-2014, but the real bad guys seem to be the various shady groups with a truly malevolent goal: to manipulate people for their own ends. In the USA and UK, a network of people supported by the billionaire Robert Mercer appears to have been working since 2014 (perhaps earlier) to hack the democratic system for the purpose of electing a new breed of right-wing candidates to power, or in the case of the UK, to fool people into voting to leave the EU. In a bizarre historical plot twist, the Russian state appears to have shared the same interests, and used Facebook in a similar way, with the goal of de-stabilising the West. The election of Donald Trump is the most visible result of this process, but we now know that operatives like Steve Bannon were using Facebook to test slogans such as ‘drain the swamp’ before Trump was even a candidate.

Cambridge Analytica are clearly bad actors, at the extreme end of an advertising industry that has leapt at the chance to upgrade its psychological manipulation toolkit thanks to Facebook and Google. There are undoubtedly others working in the shadows for dodgy political interests or state security services, and this should come as no surprise at all – state actors have been manipulating elections in other countries for decades; it is just that this time, the UK and USA are on the receiving end for a change.

Designing social systems and minimising unintended consequences

A more important question than who to blame is how we can design resilient social systems that amplify the good in us, not the bad, and that do not centralise the power of networks and communities in the hands of those who might be tempted to mis-use it. It is quite possible that basic electoral democracy on anything bigger than a local level no longer works in current conditions: it evolved at a time when ‘nations’ were coherent social entities, when everybody watched the same TV shows, read the same newspapers and broadly lived in the same reality, and when it was much harder for a small group with money to hack the process.

It is equally possible that new decentralised technologies such as the blockchain, into which so many people are pouring their hopes for a better future, may not survive their own bad actor problem if permissionless, algorithmic trust turns out only to incentivise exploits rather than create a high-trust environment in which business can be safely conducted. Perhaps cryptocurrencies are literally the opposite of the famous Antwerp diamond trader case study of social capital.

There is a lot we can learn from previous examples of well-meaning modernists creating unintended negative social outcomes. In London, from the 1930s onwards, local councils demolished whole areas of the city where people lived in strong communities but very poor-quality housing, and moved residents into new high-density estates designed by architects who envisaged gleaming futuristic campuses of social mixing and shared living, and who were probably quite shocked at the crime-ridden, dangerous sink estates those places became.

As a student, I lived in one of the worst examples, and it was a funny feeling walking along unlit choke-point walkways, hoping not to get mugged, whilst imagining the beautiful architects’ drawings in which fellow citizens greeted each other as they passed by. It turns out that community bonds, social capital and bounded trust are powerful forces holding things together, and when you break them, or throw different groups of people together in sub-optimal environments, the results are often negative.

We already know about many of the social design issues with Facebook, Twitter and other dominant social media platforms (see for example this great piece about designing trusted digital services), but we are sorely lacking in solutions that can work at scale – perhaps the quest for scale at the expense of utility is part of the problem – and we need to start somewhere.

  • How can we apply dampers to the Twitter and Facebook mob / storm phenomena that seem to pop up with such regularity? Watching individuals suddenly thrown to the lions, and the impact it can have on their lives, is frightening, but so too is the collective stupidity these storms appear to whip up. (A minimal sketch of one possible damper follows this list.)
  • What do we do about opaque newsfeed or recommendation algorithms that can keep people trapped in a filter bubble, or worse, can take them down an increasingly myopic and even radical path?
  • What can be done to disincentivise trolling, anonymous coward keyboard warriors and the spreading of deliberate falsehoods to manipulate people? Of particular concern here is the unbelievable level of misogynist abuse women receive from men online.
  • How can we balance the negative behavioural triggers (FOMO, addictive behaviour, etc.) with positive psychology that empowers and encourages, rather than damaging mental wellbeing?
  • How can we make our online super-structures look more like cities or communities and less like high-density housing? In his apology post, Zuckerberg referred to the Facebook community, but in reality there is no such thing. There are many interesting structural issues to consider in how we connect the individual and their network, the groups and communities they inhabit, and the wider world of the platform or the web itself. These are different levels of scale with very different needs, so we need to think about which mechanisms and structures are useful for each. The abuse of Facebook groups is particularly pernicious, as this Buzzfeed article illustrates, since many people are totally unaware of the bait-and-switch activities many groups are engaged in.
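
To make the first question concrete, here is a minimal sketch of one possible damper, written in Python with entirely hypothetical names and thresholds – nothing here reflects any real platform’s code. The idea is to measure a post’s share velocity and introduce friction, a short propagation delay, that grows as spread accelerates beyond organic rates, giving human judgement time to catch up without censoring anyone.

```python
import time
from collections import deque

class ShareDamper:
    """Illustrative damper for mob / storm dynamics: slow the amplification
    of a post whose share velocity spikes, rather than letting it cascade
    unchecked. All thresholds are hypothetical placeholders."""

    def __init__(self, window_seconds=3600, organic_rate=50, max_delay=900):
        self.window = window_seconds      # sliding window for measuring velocity
        self.organic_rate = organic_rate  # shares per window considered 'organic'
        self.max_delay = max_delay        # cap on the imposed delay, in seconds
        self.share_times = deque()        # timestamps of recent shares

    def record_share(self, now=None):
        """Log a share and drop any that have aged out of the window."""
        now = time.time() if now is None else now
        self.share_times.append(now)
        while self.share_times and self.share_times[0] < now - self.window:
            self.share_times.popleft()

    def propagation_delay(self):
        """Seconds to wait before a new share becomes visible to followers.
        Zero at organic rates; grows linearly with excess velocity."""
        excess = len(self.share_times) - self.organic_rate
        if excess <= 0:
            return 0
        return min(self.max_delay, excess * 10)
```

The specific numbers are beside the point; the design choice is that friction is proportional to virality, so ordinary conversation is untouched while storms lose the instant feedback loop that fuels them.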

The open web – the wonderful garden we were building before Facebook – is capable of evolving and adapting to address these issues, and is also subject to open competition, so people have a degree of influence over which sites or apps succeed and which do not. What we have with Facebook is effectively a monopolistic closed platform controlled by a few, which has the scale and financial muscle to kill off or assimilate every new trend, whether it is Instagram, WhatsApp or Snap.

It also has a tendency to blunder into areas like news, naively believing it can create a purely algorithmic solution to the distribution of news and information, which has traditionally relied upon human judgement to maintain quality, reliability and balance, as the Wired article I referenced above recounts in frightening detail. It is wholly unrealistic to believe that a company whose business model is to grab as much data about us as it can, and to keep us locked into a walled garden, can be anything other than what it is.

And with technology and society changing faster than most people can comprehend, the power wielded by the developers of new tools we quickly take for granted, and especially the power wielded by a monopoly like Facebook, should be used with much greater care for its long-term social impact, and that might also mean some form of political or regulatory oversight.

Lessons for the digital workplace?

My work is mostly focused on how we can build better organisations using the affordances of social and digital technology to reduce the need for management control structures, hierarchies and ‘politics’ in the negative sense – spaces and places that are more human thanks to their use of technology to augment our intelligence and creativity, and where the system supports the people, not the other way around. Interestingly, most of the dark patterns and negative behaviours mentioned above are not seen quite so much within the internal digital workplace. Even at the scale of large firms with hundreds of thousands of people, there is enough commonality of identity, mission and goals for real communities, real collaboration and real relationships to exist, and there are existing disincentives (such as employment contracts and governance) to behaving like a total jerk online.

As the digital workplace becomes more diverse and connected, I think we will learn through practice what ‘good’ looks like for individuals, small teams, departments and larger scale organisations. There is a role for privacy, even secrecy; there is a case for knowing who is who; there is also a role for dampers and constraints. We are not chasing vanity metrics, we are trying to be effective in coordinating work towards common goals, and so whilst the internal digital workplace still lags behind the consumer internet in many ways, I think there is also much to learn in this domain.

In Europe, especially in Germany, there is also a very strong bias towards privacy and responsible use of personal data. For example, even the EU-US Privacy Shield is not widely regarded as sufficient protection in the era of Trump, and many firms want to know their data is legally firewalled from US ownership, even if it is hosted in the EU. I think Europe will play an increasingly important role in helping to shape the norms of online behaviour and the limits of unfettered corporate power to change society. I hope these firms will trust their own people more, by improving their internal digital workplace and letting people escape the limitations of poor quality centralised IT, but also continue to protect them and their privacy against external threats.

In the short term, however, knowing what we now know about the impact of Facebook and the risks (to us) of its business model, I am curious what this will do to the adoption of Facebook Workplace, its internal communication platform, which until now has seen rapid adoption in companies that want a simple, cheap platform for their digital workplace. Would any European CIO really want to invite the Facebook algorithm, newsfeed, groups and filter bubble into their organisation, with all the potential for distorting social relations that we have seen on the public platform?

And with even people like WhatsApp founder Brian Acton calling for #deletefacebook, could we be seeing the tipping point for the public platform? If so, given that Facebook is the internet for many people, especially in poorer countries, how can the open web step up to provide an alternative?
