Chris: Hi, I’m Chris Hofstader
Francis: Hi, I’m Francis DiDonato
Chris: and this is Episode 11 of the Making Better Podcast, featuring journalist James O’Malley. James is a UK independent journalist, he’s published in many different UK newspapers, he runs the Pod Delusion podcast which is really excellent, we recommend you check it out. He’s also the founder of the TrumpsAlert Twitter feed, which tracks everything the Trump family posts online.
Francis: I think Mr. O’Malley is a good example of someone who’s trying to fill in the gaps in journalism that have opened up as information has become globalized.
Chris: So with that said, let’s get on with the interview.
***
Chris: James O’Malley, welcome to Making Better!
James: Hiyah
Francis: Hi, this is Francis DiDonato, in the House!
Chris: So James, you’re really well known for doing a whole lot of different things with Twitter, and in fact you even had one of your tweets quoted by Stephen Colbert…
James: [laughs] I remember that, yeah…
Chris: Why don’t we start with how you got to be who you are, and move on to TrumpsAlert and things like that.
James: Sure. So, my name’s James O’Malley, I’m a freelance technology and politics writer, I’ve been a freelance journalist for several years now. I was editor of Gizmodo UK, the sort of UK spinoff of big tech website Gizmodo, until last October [2018], but other than that I’ve written for a whole bunch of other publications, mostly in the UK so I don’t know how familiar I’ll be to listeners. Places like The Spectator, The Telegraph, The New Statesman; I did a Guardian piece; I’ve done a bunch of tech websites, Tech Radar, Engineering & Technology magazine, British Computer Magazine, loads of stuff like that, and that’s what I do professionally. Other than that, I waste a lot of my life on Twitter and I’ve built some bots and done some funny things there as well.
Chris: For users who might not know what a Twitter bot is, can you explain it, kind of fundamentally?
James: So basically, a Twitter bot is a Twitter account that is not run by a human being. All the tweets are posted by a bit of computer software. So, for instance, the bot I’ve built that’s been most successful is one called “TrumpsAlert.” What this does is monitor the Donald Trump family—so, Donald Trump himself, Don Jr., Eric Trump, Ivanka Trump, as well as Kellyanne Conway. I’ve written some code (it runs on a server I’ve got somewhere) that every few minutes checks to see if any of these people—these hugely important, influential people—have liked any new tweets, or if they’ve followed anyone new, or if they’ve unfollowed anyone. And if it does spot that one of these things has happened, it will then just send a tweet out automatically. So I guess that’s a sort of practical example of one thing a bot can do. But yeah, Twitter bots more generally do all sorts of interesting things. One of my favorite ones—I can’t remember the name of the account off the top of my head—but someone set up a sort of aeroplane scanner at Geneva airport, and they wrote some code which basically looks at all of the aeroplanes being detected by this scanner and compares them to a list of planes that are owned by dictators, and if it spots any dictators coming in to land in Geneva, it will tweet out and say, look, this horrible dictator from this country has landed in Geneva. It’s just an interesting way of keeping tabs on things as well.
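A minimal sketch of the kind of polling-and-diffing loop a bot like this could be built around—the handles, helper functions, and timings below are illustrative stand-ins, not the actual TrumpsAlert code:

```python
# Sketch of a TrumpsAlert-style follow/unfollow watcher (not the real code).
# fetch_following() and post_tweet() are stand-ins for real Twitter API calls;
# here they simulate data so the diffing loop below can actually run.

import random
import time

WATCHED = ["ExamplePresident", "ExampleAdvisor"]   # illustrative handles only
POLL_SECONDS = 2                                   # the real bot waits a few minutes
FAKE_UNIVERSE = ["alice", "bob", "carol", "dave"]  # pretend pool of accounts

def fetch_following(user):
    """Stand-in for 'who does `user` follow right now?' via the Twitter API."""
    return set(random.sample(FAKE_UNIVERSE, k=random.randint(1, len(FAKE_UNIVERSE))))

def post_tweet(text):
    """Stand-in for posting a tweet from the bot's own account."""
    print("TWEET:", text)

def watch(cycles=5):
    # Remember the last snapshot for each watched account, then compare each poll.
    previous = {user: fetch_following(user) for user in WATCHED}
    for _ in range(cycles):
        time.sleep(POLL_SECONDS)
        for user in WATCHED:
            current = fetch_following(user)
            for handle in current - previous[user]:
                post_tweet(f".@{user} just followed @{handle}")
            for handle in previous[user] - current:
                post_tweet(f".@{user} just unfollowed @{handle}")
            previous[user] = current

if __name__ == "__main__":
    watch()
```

The same compare-against-the-last-snapshot pattern covers new likes as well; a real bot would presumably persist its state somewhere more durable than memory so it survives restarts.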
Chris: With the evidence being pretty obvious that there was foreign manipulation of the US election in 2016, using a lot of these bots, how is the average man on the street able to figure out whether it’s a legitimate post, or whether it’s something done robotically to try to manipulate things?
James: Yeah, I think this is a really interesting and sort of fundamental tension with how Twitter, especially, works as a platform. So basically the way Twitter works is, Twitter the company have Twitter the platform, and then they provide all of these tools that are open for anyone to go and build their bots and access the Twitter data, basically. There’s a million legitimate reasons you might want to do that: say you’ve built an app that wants to use tweets, or say, like me, you build a bot that you want to post from, something like that. The trouble is, as they found before the election, this can be easily abused. So you got Russian troll farms or whoever building bots which would then post fake news and spread misinformation and that sort of thing. And so the sort of tension there is that Twitter have to figure out a way to enable legitimate uses and useful things, which I think enhance the Twitter experience—even if it’s something as simple as a news website wanting to post links to its newest articles automatically—and then balance that against making the product secure enough so that you can’t have people posting loads of nonsense tweets to try and swing an election. In terms of telling the difference, I think there’s almost a kind of media literacy that, as a society, as a culture, we need to get better at. I think young people are a lot better at this than older people are. In the old days, it used to be that you’d get a newspaper and you could judge whether the information contained within it was credible or not based upon the reputation and the prestige of the newspaper. Obviously, back then, because printing newspapers was hard to do and you had to be well-resourced to do it, you could work by the heuristic that if something was printed in a newspaper, surely someone had gone through it, checked it, and done the work to make sure it was true, because they wouldn’t want to print any false information. With Twitter, because it’s so easy to post information, whether through a bot or through an individual, that heuristic no longer works, and we have to have a different way of understanding and processing information in order to make judgements. And I think young people are a lot better at that, because we grew up with the internet and we’re used to seeing a million different contradictory sources, and not necessarily being clear what the provenance of a piece of information is, and that sort of thing. I think ultimately, it’s not going to be solved by machine—I don’t think Twitter or Facebook or whoever could write a bit of code that says “only favor or publish or share verified, correct information,” because of the scale problem in doing that. So ultimately it’s going to have to come down to us as a society and a culture learning how to do it, asking the right questions. So the sort of thing I always do is, whenever I see a tweet or a claim printed somewhere—especially when it sounds too good to be true—well, I’m a really passionate “Remainer” in the Brexit debate here in Britain, so whenever I see a tweet or a bit of news saying “oh, it turns out the Leave side have done something really awful and evil,” rather than hit that “retweet” button—because it’s my team that wins if that information gets out there—I always take a step back and think, well, how do we know that? Who is saying that?
Where is that information coming from? Just taking a brief moment to step back and think through logically how something like that could have happened—that’s something we need to think about more and work harder at, I think.
Francis: In our country, the corporate media have been accused of intentionally dumbing down the country. I guess with George W Bush we thought it couldn’t be taken any further, but with Trump’s tweets it’s almost like a cartoon—like when he tweets, it needs a speech bubble and a cartoon character of him, because it’s just that idiotic and simplistic a lot of the time. But he manages to circumvent the media, and there’s an attempt by social media to replace journalism, but I don’t see it working. As a journalist, I would be curious to know what you think of the state of journalism and how social media has kind of taken over as a source of information for people.
James: I feel very conflicted about it, because there are two ways you can look at it. On one hand, we do have all of the problems that we’ve identified today, like you’ve just outlined. I think Donald Trump presents an almost unique problem, in that anything he says is intrinsically newsworthy; even if he posts any old nonsense, the fact that he’s saying it as President of the United States makes it something that journalists should cover and report. So that is a unique challenge there. The counterargument, though, is that if you imagine the way journalism was years ago—I don’t know if there was ever a “golden age,” I know we think of Woodward and Bernstein and all that sort of thing—but if you look more broadly at the power structures in society and in journalism back in the day, as it were, it was very different in a negative way than it is now. I mean, to use my own personal story, I’ve only got a career in journalism because of Twitter and because of social media and because of blogging and getting into it that way, being able to use that as a way to get my content out there, get my name out there, network, and work my way into the journalism industry. If I’d tried to do this before the dawn of social media, certainly before the dawn of the internet, those doors would have been much more closed to me, because I’m not from an especially privileged background—I’m not from a particularly disadvantaged background either, I guess; my parents are sort of…I went to a state school in a small town. The problem with journalism is, even today, it’s a very, very middle-class occupation, and I mean that in the British sense of it being essentially high-class. It’s all people who went to private school, who are well-connected, whose dad also works in journalism and got them a job where they could work for six months for free as an intern to get in there, that sort of thing. I never had those sorts of connections. If you imagine how journalism was even more like that back in the old days, with fewer routes for people to get in, then that also sculpts the way we see the world through journalism and the sort of reporting that people would see as relevant. I mean, the really obvious examples of this are all the social progress we see now, all the reporting we do about the importance of even—I don’t want to say trivial, but that’s the word I’m looking for—even stuff like why it’s important to have female superheroes, or something like that. If the journalism establishment were the same as it was 50 years ago, that would obviously never have been part of the conversation, because of the people involved in creating that content in the first place. So to answer your question—and sorry, I’m rambling on a bit—there’s not one journalism. It’s hard to go, it’s all good or it’s all bad. There are people doing some really good things, especially in new formats and so on, and there are people doing really bad things. For every person writing an amazing ten-thousand-word New Yorker piece going into immense detail about a subject and really taking it apart, you’ve got people putting out nonsense as well.
Francis: What are your sources of good information?
James: Because I spend most of my life on Twitter, there’s not one publication or one outlet I’d point to as an authoritative source I read. I tend to look at individual journalists, and their records especially. Again, because Twitter has sort of changed the landscape of how it works, there are various publications where you know that if a story comes from one writer at that magazine or that publication, it’s a credible, well-sourced story, whereas from another writer you sort of understand the biases, or where that person could be coming from. And that’s really granular detail, which is probably far beyond anyone who’s not as complete a nerd about this as I am, but it’s more about the methodology: understanding how the information might have come about, why that person would have obtained that information, and then just asking some basic logical questions about whether it’s true or not. And then maintaining a skepticism until you know more, rather than just putting it out there, is the best way to approach things. I don’t think you can go, oh, if it’s in The Economist it’s true, or if it’s in the Guardian it’s true, or if it’s in the Daily Mail it’s false. That’s a really reductive way of looking at it, because all of them have their good points and bad points and blind spots and whatnot.
Chris: How much do you know about the algorithms used by Facebook and YouTube and whatnot to decide what to show you next? Because if you start with a completely clean account, go on YouTube, and search on “US House of Representatives,” about eight clicks later, if you just follow the “up next” suggestions, you’re on a flat-earther video.
James: This is ultimately the problem with algorithms: they’re black boxes, and nobody knows exactly how they work. You could say, well, one solution could be to pass a law that says all algorithms must be transparent. But the problem is, the algorithms are the secret sauce that makes these products and these companies successful. Google wouldn’t want to tell you how their search algorithm works, and Facebook doesn’t want to tell you how their news feed algorithm works, for good reason, because that’s their source of competitive advantage. They know that having their algorithm behave as it does ultimately benefits them as a company, and, to a certain extent, benefits us as consumers to have these companies providing content that we like, I think. YouTube, I think, is a particularly fascinating example, and the best theory I’ve heard on how the YouTube algorithm works—it’s all driven by machine learning now, rather than a human looking at view counts or whatever. My understanding is, and I could be talking complete nonsense—so again, this is a good opportunity to review the source you had the information from and consider whether it’s nonsense or not—but what I’ve heard, or what I read somewhere, and again I do recommend fact-checking me on this, is that Google basically said to its machine learning algorithms, “we need you to increase YouTube watch time. So, do whatever you can with users to increase watch time.” That sort of framing. So YouTube, because millions and millions of people go on it, is conducting thousands of mini-experiments every second. If you go on there and you watch a video to the end, that’s really good, because that’s good for watch time. If you click whatever it suggests as the next video to watch, and then you watch it, that’s a really good signal that whatever video came up next is clearly one that people want to watch, and that then boosts it up in the recommendations for everyone else. So it’s almost like a feedback—it’s literally a feedback loop, isn’t it, of recommendations that way. And that’s why you can go down the YouTube rabbit hole: start with something sane and end up somewhere crazy. One of the things they discovered is that more extreme views are more provocative, so people are more likely to click on them than on something middle-of-the-road. So you start with something in the center, something fairly moderate—let’s say you watch a video about the immigration debate, or whatever. Then you see the next video suggested as “Idiot Daily Mail columnist says that we should have a points-based immigration system.” I think that’s a terrible idea, I disagree with it, but ultimately that’s a reasonable sort of view someone can have. So you click on that, and you go, “oh look, there’s the idiot Daily Mail columnist expressing that terrible opinion.” But then at the end of that you see “YouTuber who nobody’s heard of, who has an avatar like an ancient Egyptian symbol or something, says that immigrants should be banned,” and you click on that, and you think, “what, could he really believe that?” And then, you know, ten clicks later—it’s a psychological thing, isn’t it—you end up watching flat earth videos and think, “how did I get here?”
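As a toy illustration of the feedback loop being described here—entirely made-up numbers and names, not YouTube’s actual system—you can see how ranking purely by accumulated watch time amplifies whatever gets clicked on most:

```python
# Toy model of an engagement-driven recommendation feedback loop.
# Not YouTube's real algorithm: it only shows how "recommend what has the
# most watch time" plus "provocative things get clicked more" snowballs.

import random

videos = ["moderate take", "provocative take", "extreme take"]
appeal = {"moderate take": 0.2, "provocative take": 0.5, "extreme take": 0.8}  # assumed click rates
watch_time = {v: 1.0 for v in videos}  # small starting prior for every video

for _ in range(10_000):  # thousands of "mini-experiments"
    # Recommend in proportion to accumulated watch time (the boost).
    recommended = random.choices(videos, weights=[watch_time[v] for v in videos])[0]
    if random.random() < appeal[recommended]:  # did this viewer actually watch it?
        watch_time[recommended] += 1.0         # more watch time => recommended even more

print(watch_time)  # the most provocative video ends up dominating recommendations
```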
Francis: Is that called “click bait”?
James: I think click bait’s a weird phrase, because it became a bit like how “fake news” was originally a descriptive term for literally falsified news stories that were published in order to get advertising revenue, and then it was appropriated by, well, Trump along with everyone else, just to mean “story I don’t like.” In the same way, I think “click bait” is a word you never hear said in a positive way, because it only ever means “thing I don’t really like”…As a journalist, I’ve had tons of stories I’ve published where people have just gone, “oh, what are you doing writing this clickbait? Oh, clickbait!” Whereas if it’s a story people like, nobody ever goes “oh, that piece you wrote, which was really good, yeah, total click bait.” I don’t think there’s anything necessarily intrinsically wrong with the concept of click bait. If you’re writing an article, you want people to like it. The problem is when, you know, the headline or whatever distorts the story out of all recognition, or you start bullshitting in order to get people to click on it. That’s not click bait, that’s just lies. And click bait isn’t necessarily a new thing with the internet. I mean, newspaper headlines—I don’t know about in America, but in Britain, tabloid headlines for 50 years have been essentially clickbait, they’re all trying to get you to buy the paper; it’s just now things are published on the internet. So…I think click bait can be good.
Chris: One of our previous guests was Richard Stallman, who you probably know of at least through the Free Software world, and he was talking about this social-credit system in China. I have to admit, it’s not something I know much about, but you’ve been writing about it lately, so if you can give us an intro to it?
James: Yeah, sure. I’ll go into it with an anecdote. I went to China last October [2018], just on holiday, and we took the train from Beijing to Shanghai. When you’re on the train, you know how when you get on a train it usually says “this is the train to (destination) and we’ll be stopping at X, Y and Z”? It did all that—this is the train to Shanghai, stopping at the various intervening cities—and then the announcement came on and said (I’m paraphrasing, I can’t remember the exact wording, but it said) “Please respect the rules on the train; if you don’t obey them, it could harm your social credit score.” And what this is, it’s a reference to a number of different systems that are being trialled across China. The popular conception of it is that everyone in China will be given a score, a number hanging above their heads virtually, which their behavior can affect. So the idea is, you do something good, you earn some extra credits; you do something bad, you lose credits; and then the number of credits you have can affect your ability to function in China and access services and so on, and may even lead to you being publicly shamed. The reality is slightly more complex. Basically, social credit is not a unified system or idea yet; there are loads of different trials that loosely fall under the social credit banner. Different cities are trying different things, and some of them are just sort of crude blacklists of people: if you don’t pay your court fine, you end up on a blacklist, which is part of the sort of social credit system, and being on the blacklist might mean you’re not allowed to catch a high-speed train and you can only catch the low-speed train. Or you can’t catch a plane, you’ve got to get the bus, and stuff like that. So there are systems that are involved in local government and that sort of thing. One of my favorite ones—favorite in a sort of perverse “oh this is weird and scary” sort of way—is, I think it was Shenzhen (again, Google this, don’t trust me blathering on about this), where they were punishing jaywalking. So if you cross the road when the green man isn’t on display, it would use facial recognition cameras to identify you, and then would send you a fine automatically. Anyone who was detected by this system would then be publicly shamed by having their face displayed on the video billboards by the sides of the road, and this was supposed to incentivize good behavior. But again, that’s only one system being trialled in one place. The other technology which gets covered under this social credit umbrella is a system called Sesame Credit (I think it’s called Sesame Credit). Basically it’s run by Alibaba, which is like the Chinese equivalent of Amazon and eBay all rolled into one company, and it was basically trying to create a scoring system to prove your credibility. The big problem in China is that not many people have bank accounts. And this ultimately is what underlies a lot of the motivation to create a social credit system. I think something like 20% of people have bank accounts, so if you want to have people interacting with digital services or even just government services, or, you know, just doing a business transaction, you need another way to figure out whether someone is credible or not, because you can’t just run a credit check or something like that.
And the idea is that it draws on other things, like behavior, to prove you’re reputable. This Sesame Credit system, which is linked up to Alibaba, does this sort of thing: it looks at your purchase history and sort of judges your creditworthiness, but it also uses a number of other factors. So, for example, if you’ve got a number of verified friends on the service who have also proved their worthiness, that inherently improves your worthiness, because it suggests you’re not a spam account or a scammer if you’ve got loads of credible friends. And there are various other factors it can roll into this, and then once you get your score, it can unlock different services and privileges, whether that’s taking out a loan—there was one trial, I think, where you could unlock basically a free umbrella when you’re leaving the subway station, so if it’s raining and you’ve got a sufficiently high social credit score, you can pick up an umbrella for free, because it proves your worthiness or your legitimacy. And the other big link-up is with Mobike, the kind of bicycles you hire using an app. Basically, instead of having to pay a deposit, if you’ve got a sufficiently high social credit score, you can take one out without needing to prove yourself or put any money down; you can rent bikes that way. This is where we are at the moment: there are all these different trials running in all of these different cities, with different rules all over the place. One city is punishing dog misbehavior—if you’ve got a misbehaving dog and you’re not keeping it on a leash or whatever, you’ll get punished for that—all sorts of different behaviors. So the big fear, and the reason this has become hyped in the West—and I think it is quite pernicious—is that the theory is, and the government has basically said as much, they want to create a national, unified social credit system in the next few years, so that any arm of the Chinese government would essentially be able to check your social credit. Obviously, in a totalitarian society like China, it’s very easy to imagine how something like that could be abused. If you’re seen at a protest holding a sign, that’s going to be very bad for your social credit score. If you do something else the Party don’t like, that could hurt your score and hurt you that way, and then prevent you from catching a train or being able to work or something like that. And it is a very blunt way of aligning every incentive in your life, conceivably, with the incentives the government want to promote.
Chris: Did you see the Black Mirror episode about that?
James: No, I haven’t seen that Black Mirror episode, but I’ve had literally thousands of people tweeting me, suggesting I watch it, and I still haven’t got round to it.
Chris: James, we’ve had a number of other skeptics on the podcast, and you and I first met at the QED conference, and your former podcast, the Pod Delusion, won a couple of Ockham Awards—we’ve had Michael Marshall on, we’ve had Hayley Stevens on, Jennifer Michael Hecht, and now we have you, so four people I’ve met at QED have been on the podcast.
James: Excellent. Big fan of Marsh and Hayley—I’m afraid I don’t know the other person, but Marsh and Hayley are both excellent.
Chris: QED is a conference on science and scientific skepticism that goes on every year in Manchester, England—James, maybe you want to speak a bit to skepticism as a concept?
James: Back in 2009, I started a podcast called “The Pod Delusion,” punning on the title of Richard Dawkins’ The God Delusion, and the idea was that it was a magazine show that would cover a wide range of topics. Basically I engineered it so I could talk about whatever I wanted every week, and the loose sort of unifying philosophy behind it was a skeptical, rationalist point of view—taking a scientific world view and very much existing in the skeptics movement as it was then. It went on until 2014, and the format of the show was taking in contributions recorded by literally hundreds of other people, and I was the sort of presenter figure linking together all of these different segments that people had produced. And it was really good fun, I really miss making it…Chris did a few different segments for us. I like to think at a certain point it was sort of like the house magazine of the UK skeptics scene, for a little while, because it got a fairly decent listenership and it was covering all of the different skeptics events going on—Skeptics in the Pub, QED, and so on—and also the sort of adjacent movements, like humanism and, not the science movement, but, you know, professional science promotion type things, all that sort of good stuff. I’m still a skeptic; since then I certainly haven’t changed my views on many of the core—using the word “beliefs” in skepticism is a very odd thing to do—but I certainly haven’t changed my views on, for example, the existence of God or the usefulness of the scientific method or how we should take a naturalistic world view. But I think as a movement it’s faded away. I always think back to that sort of time, around 2010, 2011, when skepticism seemed to become a really tangible big deal, in that it was having policy victories, it was having cultural victories, and there seemed to be a movement of people coalescing around the idea of being a skeptic, and it became a label that people would organize around. I always think it’s a bit like Britpop. I don’t know if you remember Britpop—this was a cultural movement in the mid-90s in Britain, I don’t know how it was perceived in North America—but basically you had bands like Oasis and Blur writing the soundtrack to it, but it wasn’t just the music, it was about the broader culture. So you had Euro 96, the big football tournament, on the television with an England team that were performing really well, you had Tony Blair on the cusp of entering 10 Downing Street, ending Tory rule and bringing back some optimism and hope, as it was then—as weird as that is to imagine now with Tony Blair. And so there was a sort of cultural coalescence around Britpop as a thing. I think skepticism was much the same, because you had The God Delusion being published, bringing lots more people into the movement, you had people criticizing the likes of alternative medicine, you had humanists, you had scientists, all working together around the idea of taking an evidence-based approach to things. I think now, it has changed.
I don’t think skepticism is necessarily a label I would choose to align myself with now, just because of the connotations attached to it that perhaps weren’t there a few years ago. As an organizing principle—almost as a word people organize around—I don’t think it’s got quite the same potency as it once had, because obviously, in combination, we’ve seen the skeptics movement itself splinter over various issues around social justice and so on. We’ve seen half the American skeptics become weird libertarians, and then obviously, not unlinked to that, there’s all of politics going to hell—everything we’ve seen with Trump and Brexit and the rise of anti-pluralist ideas and the rise of totalist ideologies once again. So to answer your question—again, I’ve gone on for a very long-winded answer—my views are still essentially underpinned by the same principles, but I think as an organizing movement…
Chris: The word “skeptic” is a hard word to work with, because flat earthers call themselves “skeptics,” climate deniers call themselves “skeptics”…I think Richard Dawkins suggested we call ourselves “brights,” I didn’t like that one…
James: This is the problem, there’s no perfect word. I mean, we have this sort of challenge now—I’m a trustee of a charity called Conway Hall Ethical Society, based in central London, which again is sort of tangentially linked to the skeptics movement and all of that; it’s basically an atheist church from the 1700s and 1800s. But the problem we have there—and again, I’m speaking entirely with my own personal views here, not on behalf of the organization or anything like that—is, what are we organizing around? And you look at all the old alternative words. Can you be a skeptic? Well, yeah, but there are obviously negative connotations with climate change deniers and all of that. Freethinker—that’s a nice word, I really like “freethinkers,” the Victorian freethinkers, that’s a really great tradition to try and align ourselves with, but then you get alt-right nutters calling themselves freethinkers, which is an association you definitely don’t want. And then you think, well, what about humanists? But then someone inevitably goes, “but that wouldn’t cover animal rights,” or, what about religious people who believe in a scientific world view—deists or something like that? So there’s never going to be a perfect word, I don’t think.
Francis: Part of the impetus of this show was to re-imagine all these terms, because to talk about capitalism, communism, all that stuff right now seems to be very unproductive. You could take someone who is a really, really wonderful, altruistic person and put them in a capitalist society, and they’re going to behave differently than a totally narcissistic, sociopathic creep like Trump in that same capitalist society. So we’ve got to move beyond that and figure out how to make things work in an optimal way for the greatest number of people.
James: I think writing off the concept of socialism or capitalism wholesale is quite a tricky thing to do. I often think back to something the writer Nick Cohen wrote in his book What’s Left—he published it in about 2005—and the line that for some reason sticks in my head is that maybe utopia won’t look particularly different to how things look now. The trouble with saying something like that is that there’s the obvious rebuttal of “but what about x, y and z terrible things in the world,” which are going on and which you obviously can’t deny. But I think the value in thinking something like that is: maybe we don’t need a radical ideological project to completely reconfigure society. Maybe we don’t need Soviet Communism—that was an enormous experiment that had disastrous consequences. And if you look at neoliberalism, or whatever the maximal extension of capitalism would be, that is also ultimately a grand ideological project which we’re still experiencing the consequences of. My increasingly boring opinion—and I used to think I was fairly left-wing, or very left-wing, and then Jeremy Corbyn happened here in Britain—my more boring center-left opinion now is, maybe we should look at what we’ve got, what institutions we’ve got, especially when you look at the landscape of Trump and Brexit tearing down all of these liberal institutions. Maybe we should appreciate that there’s been quite a lot of work over the past several centuries establishing these various norms that we now take for granted, like freedom of speech being a thing, and globalism, and that sort of thing. And so maybe we should think more about boring social democratic tweaking of the system we’ve got. I mean, my favorite presidential candidate, to put this into more context, is, unsurprisingly, Elizabeth Warren, because she’s talking about structural reforms. She’s not a timid centrist technocrat trying to turn the knobs a tiny little bit—she wants big structural reforms—but she’s putting detail in there, actually outlining a program of reform and the outcomes she would expect to see from those reforms, in a relatively technocratic way, which seems realistic and appealing. Whereas if you look at someone like Bernie Sanders or Trump or Corbyn or Bolsonaro or whoever, who are just saying tear up the whole thing and it will be better somehow—that just seems like a fairly ill-fated approach. I think the really boring thing that we’re eventually going to learn, and maybe I am just getting more centrist in my old age, is that ultimately we’re going to miss a lot of the institutions that we’ve got when they’re gone.
Chris: I’m also supporting Elizabeth Warren, for primarily the same reasons. I mean, she speaks so specifically to what she will do, whereas a guy like Bernie Sanders says, you know, “we’ll have free college education for all and I’ll tell you how we’ll pay for it after I’m elected.”
James: I don’t mind Bernie Sanders…I mean, as anyone who’s read my Twitter will understand, I really dislike Jeremy Corbyn, and obviously he’s often bracketed with Bernie Sanders because they’re both sort of radical leftists relative to the presupposed political settlements where they are. But I think they’re very different people, in that while Bernie Sanders does use a lot of radical language—you know, he literally talks of “political revolution”—he’s still more moderate and more measured, and still uses a lot of the same axioms that we expect to see in a stable political system. So, this is a random example, but on various foreign policy things—I’m pretty sure Iran or something like that—Bernie Sanders wouldn’t be too far away from what Elizabeth Warren would say. He’s not going to say “let’s do a war,” and he’s not going to say “let’s be best friends with Iran,” or something like that. Whereas you look at someone like Jeremy Corbyn—he’s from a much more radical tradition, a very different political tradition, where he doesn’t seem to have any problem buddying up with autocrats and dictators as long as they profess to be left-wing, which is why I’m a Corbyn skeptic, to say the least. But yeah, the difference, and the reason I massively prefer Elizabeth Warren, is that she comes across as someone who’s done the reading. Bernie Sanders is very much, like you say, we’ll sort it all out, we’ll worry about the details later, but we’ll do something; whereas Elizabeth Warren is laying down programs. I don’t think all her ideas are perfect—I know the big one was breaking up the tech companies, and I think emotionally that’s a very appealing thing, but I’m not entirely sure whether her stated policies would actually deliver the outcomes she wants. But the fact that she’s speaking about it, and proposing actually plausible things that could be done, I think that’s refreshing and detailed. But I would say that, ’cause I’m a very nerdy man [laughs] who likes detail and likes that sort of thing rather than just brash sloganeering.
Francis: The defense budget is just so huge—just imagine what that would cover. Student loans would be nothing; that would be the cost of probably a few percent of the defense budget.
James: I’ll tell you the weird thing I find—maybe as Americans, or North Americans, you can shine more light on it—is that Bernie Sanders calls himself a socialist and Elizabeth Warren says she’s a capitalist, but functionally there isn’t that much difference in the sort of outcomes they want. It’s so weird that Bernie Sanders is sort of framed as…I mean, I’m pretty sure he doesn’t want Soviet-style socialism or some extreme form of socialism. I’m pretty sure what he wants is basically social democracy, unless I’m radically mistaken, which is more in the direction of what Elizabeth Warren wants, and very different from what I think a lot of people who have Cameron Picknell avatars on Twitter think Bernie Sanders stands for. Or maybe I’ve got him wrong, maybe he is much more radical than I give him credit for. The one phrase I think is incredibly smart—and I apologize if this is a bit tangential—is something Mayor Pete said, and obviously he’s more of a centrist candidate than many of the others. Someone asked him, “are you a democratic socialist?” and he said, “no, I’m a democratic capitalist.” And I just think whoever becomes the final nominee should appropriate that phrase, whether it’s Elizabeth Warren or Bernie Sanders or whoever, because surely that solves the linguistic challenge of selling socialism, social democracy, moving to the left economically, to Americans who may think, “oh no, but we’re capitalists, and we want to be capitalists” and all this sort of thing.
Chris: I’d like to go back to the notion of post-scarcity and how either capitalism or socialism can handle—what if in 30 years we have 80% unemployment?
James: Yeah, I can’t claim to have thought in particular depth about this, but I think that if we assume post-scarcity is a thing that’s going to happen, or certainly that we’re going to get to a point where there is a lot of technological unemployment, the solution isn’t to go down the Trump route of “bring back manufacturing” by what seems to be the President calling in personal favors from executives to keep factories in Ohio open, or something like that. The solution is to look to ideas like Universal Basic Income. I’m sure there are critiques of that I’ve not read in depth, but in principle it seems like an appealing idea. But also, I think there’s a lot more that could be done if society—certainly American society, and indeed British society—were to take more of a social democratic turn. You could offer retraining and accessible education throughout someone’s life so they can retrain, and that seems like a much smarter solution to this problem.
Chris: I mean, as automation takes over people’s jobs, I mean…
James: This is why you need UBI…
Chris: My sister’s husband was a mortgage broker, and he’s been replaced by an app.
James: And the other reform we’re going to need—and this is presumably why nobody really wants to talk about it—is that at the moment most taxation is income-tax based, you know, taking a proportion of the income you get every month, whereas if instead you had a wealth tax, or taxed the super-rich more, you could then have more to redistribute. I mean, the really startling thing—and I’m going to generally assume the sums were done correctly—I think it was Elizabeth Warren’s student loan program, you know, she wanted to abolish all student loans. The maths on that, she said, pays for it—and again, I can’t remember the exact detail, but it was by adding a wealth tax, or increasing taxes at the top by a sort of minute amount, and then you’d just wipe out all student debt, because things are that massively imbalanced. Which is just crazy, considering the size of the figures we’re talking about. But ultimately it’s going to be about figuring out what the taxable thing is in a post-scarcity society—whether it’s wealth, or robot taxes, which there’s been talk about, and which sound ominous to me, because surely more robots is a good thing—but we’re going to have to move away from just straight-up income tax, I guess.
Chris: The way budgeting and money is spent in the United States means that Congress will often pass military spending bills that the military didn’t even ask for, because the weapon, or the weapon system, will be built in their district and it means 2,000 jobs or something like that. They’ll insist—one thing that’s not even military that’s kind of interesting is NASA’s Orion rocket project. It cost $20 billion for the whole project, and then it’s going to cost $2 billion for a non-reusable rocket every time we send it up. Meanwhile, we could be using a Falcon Heavy for $100 million per launch, and it’s reusable. A group of congressmen called the Alabama mafia, who all have huge space and defense construction going on in their districts, are all insisting on this money to keep a bunch of people employed on a boondoggle.
James: Which is insane, isn’t it? So at the risk of defending the military-industrial complex, which is not something I expected to do this evening—and don’t get me wrong, I think there are absolutely, clearly thousands of ways the defense budget could be better spent or optimized and so on—I think the extra consideration needs to be that American power, for better or for worse, does underpin the existing world order. That’s not all just Iraq wars or whatever; it’s literally protecting ships delivering containers around the Persian Gulf or going through the Strait of Malacca, and just maintaining, basically, the base-level preconditions that we are now used to, that have enabled our entire post-World War II lifestyle and affluence as a society. So I think you need to be very careful. It pains me to say this, as someone who listens to a lot of punk music and [laughs] is on the left, but if you just say we should get rid of all defense spending, or we should do something dramatic there, you do need to think about the wider implications. It’s a bit like the way Brexiteers in Britain think we can leave the EU and everything will be fine, forgetting all of the boring, foundational stuff the EU provides to the British economy. In the same way, America is almost providing that foundational layer to the existing world order. Maybe there’s a better world order, maybe we can change things so that the world is organized in a different way, and maybe that could be a good thing. But at the moment it is underwritten by American power. So if that goes away, we need to actually consider what changes, or what replaces it.
Chris: OK, what about artificial intelligence? I heard a woman on the BBC refer to the US AI giants as the “G-MAFIA”—standing for Google, Microsoft, Apple, Facebook, IBM and Amazon—and say that they are not only growing economically, but also have intelligence power greater than most nation-states.
James: It’s really funny—I’m quite pleased that the power of the big tech companies has come into focus over the last few years, because this is something I’ve been banging on about for years. I was writing about this in 2012, 2013, and I wasn’t prominent enough to have people scold me and write it off or whatever; I was just writing blogs that were being read by very few people. At the risk of self-aggrandizing, I was making a similar point to what is now almost a normal part of the conversation everywhere: that these companies need greater regulation, and that we need bigger conversations about what their power means, because they are different from a lot of other companies. Actually, it was Paul Mason, the left-wing journalist, who wrote a book, PostCapitalism, which made this point. If you look at all of the tech companies, like Google, like Amazon or whoever, their business models, their products, their technology tend towards a situation of natural monopoly, in that you can’t beat Google because Google has the amount of data Google has, and every search you do on Google makes Google better. In the same way, Amazon can’t be beaten because everything Amazon does makes Amazon better, and this is all powered by AI, because we’re all training the AI of these companies. So they’re entrenched in the system now; they’re almost too big to fail, like the banks were.
Chris: Richard Stallman likes to say that with Facebook, you’re not a “user” you’re a “use-d”…
James: I think that’s broadly true. But what should we do about them? And there isn’t a really easy answer, because—I mentioned Elizabeth Warren’s proposals earlier to break up the tech giants, but even then, that’s not a particularly satisfying answer. Say you forced Facebook to break up, so that Instagram and WhatsApp are separate companies again. Which sounds like a do-able thing in the abstract, because they used to be separate companies, so surely they can be separate companies again. But then you look at how Facebook have integrated them: Instagram messaging, Facebook Messenger and WhatsApp are all running off the same back end, the same messaging service is now powering all three, and it’s just three different logos, because those are the legacy brands that were in use. So how would you separate that out, considering it’s basically copy and paste—if not literally the same files, then the same back ends running these different systems? And even if you did, say, segregate Instagram out from Facebook or something like that, you’ve then got the problem of, well, what’s to stop Facebook, which still has 2 billion users, just creating a new Instagram knockoff and stealing all of Instagram’s users? Like we saw with Snapchat—Facebook basically managed to, well, not necessarily destroy, but neuter the threat of Snapchat. There was a while where it looked like Snapchat could claim some really big market share against Facebook, but then Facebook basically ripped off all of Snapchat’s best features, put them into Instagram, and now Instagram Stories and Instagram filters do the job of Snapchat, and Snapchat’s user base has dropped off completely. So even without buying the company, they’ve beaten it. I don’t know what the easy way to stop them, or to regulate or neuter them, is…
Chris: We had the anti-trust lawsuits against Microsoft, and Microsoft was put under a ton of restrictions and moved into the AI business and now they’re one of the giants there.
James: This is the thing—Microsoft are the biggest company in the world, depending on the day of the week and the market cap, but nobody really cares, because they’re not at the forefront of people’s minds any more. They’re just doing boring, boring business stuff, doing enterprise software, ultimately, and then they have an Xbox on the side for some reason. Nobody really worries about them, yet often they are the largest company in the world.
Chris: And like in the old days, they’re working very closely with IBM again—lately, Microsoft has a compiler for the IBM quantum computer.
Francis: I think a lot of what’s going wrong on this planet right now, especially with regards to global warming and that sort of thing, is just simply greed. And you know we all, I think, recognize that after a certain point, greed is a bad thing, and I was wondering if you had any thoughts about how to deal with greed in our society?
James: I think a lot of the problem is that—I agree, in the abstract, we should regulate Facebook, Amazon, Google and Apple more. But how you actually regulate them, and how you would go about writing a rule to manage them, is a much more difficult question. Because even with those four companies, if you try to think, well, what rule can we write—say you want a rule that no company should have a big market share in search, or something like that—then that punishes Google but not the others. Or take even just the big four companies, right? They’ve got very different business models. If you say “big tech,” that has a colloquial meaning, and everyone knows the companies we’re talking about when we use the words “big tech,” but they’re very different companies. Where are they actually competing? They’re all competing with each other, but in such different ways and to such different ends. If I were a government regulator trying to draw a line and say, right, this is the legal line in the sand, we’ll restrict your behavior in this way—I would find that almost impossibly complicated. And the really good example is—sorry if I keep going back to Elizabeth Warren, but she’s talking about detailed proposals—one of the rules she said she would draw is that if companies operate a marketplace, they shouldn’t be able to sell their own products on that marketplace. So Amazon can have a marketplace of sellers that they’re connecting to people, being the marketplace, but then they can’t have Amazon Basics, which is, you know, their own brand—various household things they just sell with an Amazon label. Or you can’t have Amazon publishing its own books through Kindle on the Kindle store, or something like that. And that sounds like a good idea, but then you think, what about the Apple App Store? Because Apple both operate a market there and also control what goes on that market through the App Store and its rules, and they often write rules that say, you know, you can’t have an app that competes with Apple in certain categories; you can only have certain types of apps on the App Store. But ultimately, there’s also a good argument for why they should be allowed to do that, which is that it enhances the security of our phones and our devices to have Apple as the sole monopolizer of that market, able to kick apps off the App Store, or not allow apps onto the App Store, at their discretion. That makes our phones more secure and less hackable.
Chris: And if you want more access to different apps and things like that, you can always switch over to Android and become a full-time systems integrator…
James: Even they’ve got the same sort of problem, in that, you know, Google have the Play Store and they monopolize that, and I know there are third-party stores on Android, but 99.9% of users are never going to do anything even vaguely complex to get around that.
Chris: Google seems to read absolutely everything that flows through its system, which includes Gmail and things like that. Privacy is disappearing—what do you have to say about the right to privacy?
James: It is important. I mean, of the big tech companies my favorite—that’s a weird way of talking about them—is Apple, because Apple do have a big focus on privacy, and you see it run through all of their products; I think positioning the business around privacy is a really clever move. The really good example of this is, they recently announced that in the future—you know when you go on an app and you can sign in with Google or sign in with Facebook? Soon you’ll be able to sign in with Apple, and what this does is create a disposable email address, so whatever service you’re signing up for or signing in to won’t actually get your real email address, and if they start spamming you, you’ll be able to cut them off really easily, stuff like that. And again, Apple encrypt everything on your phone, they won’t give the data to the FBI, and everything else, and ultimately that more secure experience is a good thing because it creates more trust for the user. If we’re going to have these devices in our pockets with all these sensors on them, we want to be able to trust what’s going on there.
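The disposable-address idea is easy to picture with a small sketch—this is not Apple’s implementation, just a hypothetical relay directory showing why a per-service alias makes spammers easy to cut off:

```python
# Hypothetical illustration of the "Sign in with Apple" disposable-email idea.
# Not Apple's actual service: each signed-up service only ever sees a random
# relay alias, which forwards to the real address and can be revoked on its own.

import secrets

class RelayDirectory:
    def __init__(self, real_address):
        self.real_address = real_address
        self.aliases = {}  # alias -> service it was issued to

    def issue_alias(self, service):
        alias = f"{secrets.token_hex(8)}@relay.example.com"  # made-up relay domain
        self.aliases[alias] = service
        return alias  # the service stores this, never the real address

    def revoke(self, alias):
        self.aliases.pop(alias, None)  # "cut them off really easily"

    def deliver(self, alias, message):
        if alias in self.aliases:
            print(f"forward to {self.real_address} (via {alias}): {message}")
        else:
            print(f"drop: {alias} has been revoked")

if __name__ == "__main__":
    relay = RelayDirectory("me@example.com")
    shop_alias = relay.issue_alias("some-shop")
    relay.deliver(shop_alias, "Your receipt")   # forwarded to the real inbox
    relay.revoke(shop_alias)                    # the shop started spamming
    relay.deliver(shop_alias, "BIG SALE!!!")    # dropped
```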
Chris: It was Apple’s end user license agreement that got you onto the Stephen Colbert show.
James: [laughs] It was. I can tell you that story, if you want. When Donald Trump signed the “nuclear agreement” with Kim Jong Un at their first summit, it was like a two-page affair with basically no commitments or anything else in it. I tweeted out the fact that it was a looser agreement than the Apple iTunes terms of service, because if you look at the iTunes terms of service, buried on page ten-billion or whatever, where no one would ever read it, it literally says this software cannot be used for nuclear weapons—which is more than the “nuclear agreement” with Kim Jong Un says—and yeah, that was picked up by The Late Show. So that was a very strange morning, waking up and seeing the video of Stephen Colbert reading my tweet, and that was very cool. Can I just go back to privacy? Because I’ve got one other thing I want to say on that. The flip side of me evangelizing about how Apple are really good at privacy, and how I think that’s a really good thing, is that it’s almost too easy for Apple, given their business model—it’s almost like a free hit. They don’t need to worry about the negative ramifications of taking that stance, whereas a company like Google, which exists on advertising, obviously needs to read our data to target advertisements, and you’ve got Facebook, and the same for Amazon. And the flip side—I’m stealing this opinion from a blog called Stratechery, which is all about the tech industry and takes a more abstract approach to it—is that Facebook came out not long after this and said, “well, yeah, but we’re not putting our servers in any country that isn’t a democracy.” Basically this was a riposte to Apple, because Apple have servers in China and let the Chinese government access iCloud, because they have to in order to operate in that country. But again, Facebook can say that because Facebook is obviously never going to be unbanned in China. It’s almost like a freebie—it doesn’t harm their business model to take these stances, whether it’s pro-privacy or anti-China, see what I mean? But each of these companies has a lot of power over our privacy, and I think any steps to enhance it are good…
Francis: Maybe the purpose of government should be that—in capitalism, in business, the bottom line is profit; it’s kind of a survival-of-the-fittest world that it inhabits. But when there are things that are necessary for the common good, then let’s maximize the benefit to the most people, and that’s the realm of government. So why can’t we either regulate businesses to behave themselves, or keep them from getting too big and allow diversity maybe to take the place of government? Why don’t we re-imagine the usefulness of government being in charge of things that are necessary for everyone—say, banking, healthcare, energy?
James: Ooh. Yeah, again, I can’t claim to have any especially complex thoughts about this, but I think it’s about designing laws and institutions in such a way that they mitigate it, because greed, I think, for better or for worse, is a part of human nature. That isn’t to say we should embrace it—we should try and control it; that’s why we have higher taxes for the rich and so on, because if people are going to be naturally greedy, then we should design our institutions to try and rebalance things. It’s like, if you ask a relative of a murder victim whether they think the murderer should receive the death penalty or not, they’re probably going to think, yeah, the murderer should get the death penalty. But the reason we have the institution of the courts and impartial justice is partially to a) ensure the person is actually guilty, rather than just going on a gut reaction, and b) make sure it isn’t just an eye-for-an-eye, heat-of-the-moment, taking-revenge type thing, and that there’s actually a different aspect to it. So when it comes to dealing with greed, if greed is a part of human nature, which evidence suggests it certainly is, then we need to design our institutions to mitigate it.
Francis: Well, when Richard Stallman was on our show, he came up with an idea that I thought was pretty brilliant, which was to have a progressive tax on corporations so that they couldn’t get too big—beyond a certain level, it would be pointless. And I just loved the simplicity of that.
James: That idea is definitely emotionally appealing to me. I’d have to think a bit harder about that, as to what sort of effects it would have. It just sounds, again, emotionally appealing.
Chris: OK James, as we ask everybody, is there anything you’d like to promote or plug or—this conversation, we could probably continue for the next four hours…
James: Ha ha—so what would I like to promote or plug? You can follow me on Twitter, I’m @Psythor; that’s where you’ll find links to all of my content, my terrible opinions, my retweets of people subtweeting Jeremy Corbyn, and any extra followers are always useful for sort of increasing my—social credits in the journalism world. So that’s probably the best thing. My website is JamesOMalley.co.uk if you want to look at my CV—why not commission me to write for you? That’s about everything I’d like to plug, I think—actually, I’ll tell you, I’ve got one more thing, I’ve just thought of it, sorry, I should pretend I didn’t just think of it and it was planned all along. If you enjoy podcasts, which I’m guessing you do if you’re listening to this, you should check out a podcast called “Science Fiction Double Feature,” available on all good podcast stores. It’s made by my partner, Liz Lutgendorff. It’s a science fiction podcast where she’ll interview a science fiction author, and then she’ll interview an expert spinning off one of the themes in the book, and it’s really fun, really informative—you should go listen to that as well.
Chris: Great! Well, with that, thanks so much for coming on.
James: It’s been fun, thank you.
Francis: Yes, thank you very much.
—end.