There are two issues here. One is the Facebook business model. When you run the largest multi-datacenter consumer internet site in the world, when your growth model is premised on offering billions of dollars of technology services for free, and when your revenue model is based on advertising, you need to deliver effective advertising. The way to do that is to understand as much as you can about every user, so that you can deliver the ads they are most likely to click on. Advertisers will recognize the higher response rates on your platform and pay you correspondingly higher advertising rates. Let's be very clear - THERE IS NOTHING WRONG WITH THIS. Companies have ALWAYS sought to know as much as they could about their customers - when you use your loyalty card at the grocery store, they give you a discount because the information they are collecting about your purchasing habits is valuable to them.
The key to this discussion is that there are two kinds of data. There is PII - Personally Identifiable Information - and there is masked or aggregated data, where the trends and characteristics of a user or group of users can be analyzed and shared without exposing WHO those users are. Every company has PII in its internal systems. Whether it's payment data, addresses, banking, age, health, financial status, home ownership, etc. - it's impossible to work with people without collecting information about them. Social media is its own thing - people, in the course of using the platform, tell the system a great deal about themselves, both directly (I have two kids and another on the way, I have four dogs, I live in Cleveland, etc.) and indirectly (Likes, emojis, quizzes, etc.).
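To make that distinction concrete, here's a minimal sketch in Python - the records and field names are entirely made up - showing the difference between raw PII and the kind of aggregated view that describes trends without exposing any individual.

```python
from collections import Counter

# Hypothetical raw records - this is PII: names, emails, exact locations.
users = [
    {"name": "Jane Doe", "email": "jane@example.com", "city": "Cleveland", "interest": "dogs"},
    {"name": "John Roe", "email": "john@example.com", "city": "Cleveland", "interest": "parenting"},
    {"name": "Ann Poe",  "email": "ann@example.com",  "city": "Akron",     "interest": "dogs"},
]

# Aggregated view - trends only, with no way to identify any one person.
by_interest = Counter(u["interest"] for u in users)
by_city = Counter(u["city"] for u in users)

print(by_interest)  # Counter({'dogs': 2, 'parenting': 1})
print(by_city)      # Counter({'Cleveland': 2, 'Akron': 1})
```

The aggregated counts are the sort of thing that can be analyzed and shared; the raw rows above them are what has to stay inside the company.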
Google uses the information they collect to provide ad targeting services too. But there's ONE big difference. Google does the targeting themselves - no PII ever leaves their possession to go to the advertisers. Facebook has at least one program where they will provide PII to the ad networks to do their own targeting. This is 100% of the problem, and it will likely be curtailed in light of the firestorm. But that's it - all the screaming about privacy and data protection and "selling" data comes down to nothing more nefarious than who is using the data to target the ads. It's not optimal, but it's really not the end of the world. And if you think it's sufficient reason to quit using the Facebook platform, or if you think it makes Zuckerberg uniquely evil, you really should look hard at the other online platforms and tools you use.
The other issue is the ad content itself. How can they rapidly identify the ads that are problematic - those that represent foreign electoral influence, for example? And is it even desirable for them to take it upon themselves to filter ad content? This gets really dicey. Yes, as a private company they are not obligated to provide unfettered speech à la the First Amendment. But as a content and messaging company, they MUST retain credibility. If people feel their voices are being silenced, the platform itself will lose credibility, and thus users. And make no mistake, this would be a 'both sides' problem. If we demand that Facebook silence certain kinds of content, they're going to make absolutely certain that there is NO hint of political bias in the decision. That means they're going to silence at least one liberal for every conservative. And because content guidelines can never cover every eventuality, these decisions are going to be seen as crude and arbitrary, and there's just no way that a company the size of Facebook could provide sufficient resources to arbitrate every complaint.
The argument about "Fake News" is just a complete red herring. Yes, the Russians used Facebook (and Twitter and Instagram) to attempt to influence the 2016 US Presidential election - and they did so with both paid advertising and straightforward user postings. If you want to prevent foreign money from paying for these kinds of ads, great, but don't pretend that money can't be moved around in a manner that conceals its source. Russians won't purchase the ads directly; Americans will. The question of where the money actually came from, as with all campaign finance questions, will ultimately go unresolved. And, once again, if you want to empower Facebook to arbitrarily silence controversial users, don't get all weepy on me when they silence YOU too. They don't want to be in the censorship business, and if we force them to take on that role, they're going to make sure, once again, that it doesn't look slanted - and that means silencing voices we want to hear.
At the end of the day, all consumers of political and public policy news should be skeptical. If you read something, and it's something you REALLY want to believe, that's the moment you should be extra-cautious about accepting it at face value. Do your due diligence or accept that you're being lied to and used. Pizzagate was a stupid, irrational conspiracy theory with zero evidence to support it. People who believed it WANTED to believe it. People who took a moment to check other authoritative and primary sources quickly discovered the truth. This is called 'motivated reasoning', and it has NOTHING to do with Facebook or social media. Fox News is the greatest example of a propaganda outlet that KNOWS its audience only wants to hear certain things, even if they are untrue.
So, does this mean there are no problems with Facebook and the larger social media ecosystem? Of course not. What it DOES mean is that the current hysterical freak-out has much less to do with Facebook policies and processes and much more to do with our own frustration and sense of helplessness. I would expect the spittle-flecked hatred of all things Zuckerberg to die down after the midterm elections, when there are more actual functional checks on Trump's madness. But ultimately, don't expect much to change, because there's really not very much to BE changed.