I’ve deleted my Facebook account (which apparently takes anywhere from 24 hours to 45 days to accomplish) over its involvement with Cambridge Analytica. I’m not saying you should too, but you should think about it. For me, it came out of distaste at being complicit with a large company whose sole purpose is to collect personal data and monetize it. That data is being collected for current but primarily future AI applications – I think there’s little doubt about that. Facebook has also known about its privacy and trust problems for a long time, and was warned about them internally, but chose to do nothing until it got caught.
At an ethical level (rather than just moral squeamishness on my part), I’ve been advocating for greater ethical scrutiny of matters touching on AI, so it would have been hypocritical to let this pass. That made it an easy decision for me (though still a painful one), but it is specific to my situation and past statements and doesn’t necessarily apply to any of you.
But at least it’s a good moment to pause, consider and exercise our ability to think ethically. What is the case for staying with Facebook? What is the case for leaving Facebook?
Here’s my first go at an ethical case:
Facebook helps me connect with family and old friends in a convenient way. I might lose these relationships if I delete Facebook.
As a Microsoft MVP, I’m judged on my reach and influence. Deleting Facebook amputates my reach and consequently my appeal to Microsoft as an MVP.
Facebook is used to organize people against repressive regimes in liberation movements. This is a clear good.
Cambridge Analytica was not an isolated instance. Facebook was a poor steward of our personal information.
Facebook was warned about its policies and didn’t act until 2015 to review its internal practices.
This is Facebook’s core business and not an aberration. They are collecting data for AI and had a strong interest in seeing how it could be analyzed and used.
Participating in Facebook gives them access to my information as well as that of my friends.
Their policies can change at any time.
Participating in Facebook encourages family and friends to stay on Facebook.
The loose relationships formed on Facebook encourage and facilitate the spread of false and/or skewed information.
Facebook served as a medium for swaying the 2016 U.S. elections through bots and Russian active measures.
The harm outweighs the gains, and the potential for future harm outweighs the potential for future gains. Moreover, through continued participation, I am co-implicated in the present and future harm perpetrated by Facebook. QED: it is ethically necessary to sever relations with Facebook by deleting my account.
But just because something is ethically wrong, does that make it wrong wrong?
Alasdair MacIntyre proposed in his 1981 book After Virtue that we post-moderns have lost the sense for moral language and that worldviews involving character, moral compasses and virtue are, for better or worse, unintelligible to us.
Malcolm Gladwell makes a related point in a talk at New Yorker Con about how we justify right and wrong to ourselves these days in terms of “harm” (and why that doesn’t work).
In a past life, I used to teach ethics to undergraduates and gave them the general message that everyone learns from Ethics 101 in college: ethics is stopping and thinking about not doing something bad just before you are about to do it anyways.
Along those lines, here are some arguments I’ve been hearing against acting on our ethical judgment against Facebook:
1. It will not really harm Facebook. If a few hundred thousand North Americans leave Facebook this week, it will be made up for by a few hundred thousand Indians signing up for Facebook next week.
2. Facebook is modifying its policies as of right now to fix these past errors.
3. Facebook has just severed relations with Cambridge Analytica, who are the real culprits anyways.
4. Since 2015, you’ve had the ability to go into your Facebook settings and change the default privacy permissions to better secure your private information.
These are legitimate points. Here are some counter-points (but again, this is just me doing me while you need to do you; I’m not judging your choices, even if I sound judgy at the moment):
1. Leaving Facebook isn’t primarily about imposing practical consequences on Facebook (though if it did that would be gravy) but rather about imposing consequences upon myself. What sort of person am I if I do not leave Facebook knowing what they have done?
2. It isn’t even primarily about what bad deeds Facebook has committed in the past, but rather about what those actions and policies say about who Facebook is today. Facebook is, to use a term the author Charles Stross coined in his talk Dude, You Broke the Future, a slow AI. It is a large corporation with an internal culture that leads it to repeatedly follow the same algorithms. There is no consciousness guiding it, just algorithms. And the algorithms collect widgets, in economic terms – which in this case are your personal data. It can do nothing else. Eventually, following the algorithm, it has to turn your personal data into cash.
3. In the moral languages of the past, we would call this its character. Facebook has a bad character (or even Facebook is a bad character). Having good character means having principles you do not break and lines you do not cross.
4. I want to wear a white hat. To that purpose, even if I can’t stop a bad character, I don’t have to help it or be complicit in its work.
Switching gears, somewhere between World War I and World War II, we probably lost the proper language to discuss ethics, values, morals, what have you. There are artifacts from that time and before, however, that we can dig up in order to understand how we used to think. Here’s one of my favorites, from the e. e. cummings poem i sing of Olaf glad and big:
Olaf(upon what were once knees)
does almost ceaselessly repeat
“there is some shit I will not eat”
This is what I would say to Mark Zuckerberg, if he would listen, and even if he won’t.