Making Better—Jim Fruchterman
(Music) Welcome to the Making Better podcast, interviewing some of the world’s finest thinkers about a more optimistic future. Now, here are your hosts, Chris Hofstader and Dr. Francis DiDonato.
Chris: Well, Francis, we’re up to episode 6 of Making Better!
Francis: Yes, and this is a really great episode.
Chris: This episode is Jim Fruchterman. He’s a MacArthur Genius Fellow, he’s one of the people who invented modern machine-learning-based optical character recognition (OCR), he’s a social entrepreneur and a leader in the social entrepreneurship field, and while I respect an awful lot of people in the world, Jim is one of the very few whom I truly admire. This episode has a few audio glitches in it—we use a program called Zoom to record, and we had a few internet hiccups—but we hope you enjoy the episode.
Francis: (hiccups) That was a hiccup.
Chris: Jim Fruchterman, welcome to Making Better!
Jim: Thanks a lot Chris, glad to be here.
Chris: So you have a long and varied career doing all kinds of things, but always with a social conscience aspect to it. So if you could just give us a bit about your background, that’d be a great way to get started.
Jim: Well sure. Well, basically I’m a nerd. You know, I started doing computer programming in the early 70s, I went to Caltech, which is kind of nerd mecca, and so I was always interested in technology and science and figuring things out—and it was never quite clear what that career was going to be, but I thought I’d either be an astronaut or a professor. So that was kind of the track that I was on in college and when I started grad school. The connection I had to solving social problems was in college, I was in a class, a modern optics class, and we were learning how to make optical pattern recognition things. And because it was the 70s, and pretty much all the jobs were in the military-industrial complex, our professor was using the example of how you could essentially get a smart missile with a camera in its nose, and have a computer that had a representation of the target—could be a tank, or a bridge—and the idea is that you’d fire your missile, it would look around in the world until it spotted its target, lock on, zoom in and blow it up. So I had to do a project for this class, and I was going back after the lecture going, “I wonder if there’s a more socially beneficial application of this.” And then I got my one good idea in college, which was, hey, maybe you could make a reading machine for the blind. Maybe instead of recognizing tanks, you could recognize letters and words and speak them aloud. So, the next day I went back to my professor with a lot of enthusiasm, and he explained that someone actually had used this kind of technology to do pattern recognition on words; as a matter of fact, I think it was the National Security Agency using it to sort through Soviet faxes that they had intercepted. They had too many faxes, so if they could spot a phrase like “nuclear weapon” in Russian, they would actually route that fax to a human analyst to actually review. And I said, oh, great, so it’s already been built—how much does it cost? He said, uh, I think it’s millions of dollars per installation…which took a little of the air out of my, you know, reading-machine-for-the-blind tires. But it laid the groundwork for some of the future things that happened. So after finishing my master’s at Caltech, I went to Stanford to start a PhD, and—Stanford is in the middle of Silicon Valley, and this was a pretty exciting time in Silicon Valley’s history, and so I and a couple of other engineering grad students started an entrepreneurship talk series in our dorm. And our first speaker had started a PC company named for our dorm, and the second speaker was the president of a private rocket company. And I’d always wanted to be an astronaut, I’d even gotten an interview with the people at NASA in Texas, and so I said, ah, great! So I took a leave of absence from my PhD program, joined the rocket project as their chief electrical engineer, built all their electronic systems, and the rocket actually blew up on the launch pad. So that was a bit of a disappointment…and I went back to Silicon Valley with my boss from the rocket project, and we tried to start our own rocket company; we tried to raise $200 million. No one gave us $200 million, and then my boss—now partner—said, hey, I know this guy who’s a chip designer at HP and he wants to start a company to design a custom chip that will do something really cool. And I said, what’s he got in mind? He said, I don’t know, let’s go have dinner with him.
So we went and had dinner with this guy, and he described how he wanted to make a chip that could take in light and recognize letters and words. And I went, wow, that’s like my good idea from college—you could help blind people read with that. And so that became the start of a company that was originally called Palantir and then changed its name to Calera, and made essentially the first omni-font character recognition technology that worked without being trained. And as we started the company, we became more aware that Ray Kurzweil had invented an OCR system and a reading machine before we had, and we, you know, raised a bunch of money from Silicon Valley venture capitalists to compete with their character recognition product. Long story short, it was one of the early machine learning companies in Silicon Valley; our particular breakthrough was, we took millions of examples of characters and trained an algorithm to recognize those characters, and it worked really well. The company built up, sold a lot of products to insurance and law firms and the government—you know, those were sort of our main commercial markets—but the dream of making a reading machine for the blind was still there, and I still didn’t know any blind people, but I just imagined they could use this. And so we built a secret prototype, based on our commercial character recognition product, that was connected over a serial cable to a PC that had a first-generation [*] voice synthesizer in it. And we demonstrated this to our board of directors, and it worked—it scanned the page and it read it aloud—and my board was, you know, excited by the product demo, you know, there’s a new potential product, and they said, Jim, you’re the VP of marketing, how big is the market for reading machines for the blind? And I said, well, we think Kurzweil is selling about a million dollars a year, now that they’ve been acquired by Xerox…a relatively awkward pause occurred in the board meeting, and they said, well, but we’ve invested $25 million in this company, what’s the connection between a million-dollar market and that? And I said, oh, it would be great PR, the employees are really excited about it, our customers will be proud of us…and they’re like, no, you know, you’re only at $15 million a year and you’re supposed to be at $30 million a year in revenue by now. You’re missing plan, you’re not making money, we’re not going to allow you to distract the company to launch a new product to help blind people, because it doesn’t make enough money. And they were, you know, right from a business standpoint, wrong from a social and moral standpoint—so that’s kind of what caused me to launch out of, sort of, the traditional Silicon Valley tech world and into the assistive technology and nonprofit world.
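[Editor’s note: for readers curious what “took millions of examples of characters and trained an algorithm” looks like in modern terms, here is a minimal sketch in Python. It uses scikit-learn and its small bundled digits dataset as stand-ins for scanned glyphs; Calera’s actual 1980s system was custom hardware and software, so everything below is illustrative rather than their implementation.]

```python
# Minimal sketch of training-based character recognition: learn a mapping
# from pixel patterns to character identities from labeled examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

digits = load_digits()  # 8x8 grayscale digit images with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

# "Millions of examples" in the real system; ~1,400 here, same principle.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("recognition accuracy:", accuracy_score(y_test, model.predict(X_test)))
```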
Chris: And that’s when you founded Arkenstone.
Jim: That’s right. So, after the board vetoed the project because they didn’t want to distract the company, [*] said, well, you could start your own nonprofit. And I said, what do you mean, nonprofit? He said, well, you don’t think there’s any money in this…I said, no…he said, I can give you pro bono help to start a charity, and you’d be essentially a tech nonprofit. I kind of giggled, ‘cause, you know, I said, well gee, I’ve been associated with an accidentally nonprofit tech company…gee, maybe if you’re a nonprofit tech company you’re, like, successful by definition if you lose money! (laughs) And so that was the start of Arkenstone, and the idea was, because the market was so small, if we could make a break-even, you know, half-million, million-dollar-a-year venture, it would be a big success. And Arkenstone became the only high-tech company I’ve ever been associated with that actually beat its plan. I think within three years we were at $5 million a year, making reading machines for the blind as an enterprise, and breaking even—and that’s how Arkenstone went into the reading-machine-for-the-blind business, and my old company was perfectly happy for me to do it, just as long as I did it outside of the company as a customer. They gave me a really big discount, they gave me extended credit—as long as I wasn’t distracting the team from making money, they didn’t have an objection to me doing it. And basically, I got a pretty sweet deal in exchange for a noncompete and no-hire agreement with my old company.
Chris: And that would go on to become what’s now Benetech, a nonprofit with a much broader set of goals and agenda. Why don’t you tell us about Benetech…
Jim: So Arkenstone got started in 1989, got to $5 million a year, and as time went on, you know, we would keep cutting prices, more people would be able to afford the product, and our revenue stayed about the same. We were always break-even. We created a new product, a talking GPS for the blind called Strider, but it didn’t make enough money and we were short of money, and Mike May, who was then our VP of Sales and kind of our core user of that product, ended up spinning out and starting the Sendero Group to make talking GPS. But I was basically struggling with the fact that running a break-even social venture meant that I had no extra money, and the fact that we kind of had to shut down Strider or spin it off was basically an indicator that break-even was great as far as it went. So after about ten years, I was kind of getting bored—I had all these ideas for other things that we could do to help blind people, to do stuff [for human rights]—and then the guy who started what became Freedom Scientific, Dick Chandler, came over and said, hey, I want to buy Arkenstone from you. And I said, well, it doesn’t belong to me, it’s a charity, go away! So he came back a couple of months later and he said, Jim, why don’t you tell me what your aspirations are? Hmm. This turns out to be a negotiating ploy, but as a nerd I didn’t really recognize it as such, and I said, well, I have all these dreams of doing, you know, other things for blind people, I want to do human rights…and he said, tell you what, I’ll give your nonprofit $5 million, buy the assets of Arkenstone and merge it into Henter-Joyce with its JAWS product and Blazie Engineering with their Braille ’n Speak product, and, you know, we’ll create this new company, and you can stay in the nonprofit with your engineering team, rent the engineering team back to us for a year, and then go off and start new projects. But they also bought the Arkenstone name, so we had to change the name of the nonprofit from Arkenstone to Benetech. And we did our year of work on the next version of all the products that we had sold to Freedom Scientific, and then we had the ability to go off and look at a whole bunch of new projects. So what we did is, we had about $5 million from Freedom Scientific, which could not go into my pocket—that’s illegal, it’s a charity—and we raised another $4 to $5 million from big Silicon Valley donors, especially Skoll and Omidyar, the two key people behind the creation of eBay. We had $9 million, we looked at a hundred ideas, we invested in 20 ideas, and four of them became nonprofit social enterprise products that went on to change their field. The one that is really well known in the blindness field is Bookshare, but we also started the first big data group in the human rights movement, we created the first software for capturing human rights data so that the information wouldn’t be lost, and we created an environmental project management package. Plus there are a lot of other projects that we tried that didn’t take off, which is pretty much the Silicon Valley way; it’s just that in every case we’re not looking to make money—‘cause we’re a nonprofit—it’s how can we help the most people while breaking even.
And that formula, you know, not only worked after we sold to Freedom Scientific, but has continued to work to this day. We always have lots of cool tech-for-good projects in our hopper, and we’re busy trying to figure out which ones will take off, and then scaling them up to make an impact.
Chris: And your business plan for Benetech received the Charles Schwab recognition as the best business plan for a nonprofit that year…
Jim: We got a lot of recognition. It was Klaus Schwab, who was the founder of the World Economic Forum, the Davos people, who gave us the Social Entrepreneur award and our Bookshare business plan won, I think, runner-up in the Yale business plan competition, which was the first social [*] business plan competition…and then we won the Skoll award. Even though we were pretty early on, once we made that transition to Benetech, we started to get a lot of attention, because the social entrepreneurship field—this idea of using innovation and entrepreneurship to help solve social problems—really started taking off in the early 2000s. Because we’d been doing it for a dozen years, we were seen as one of the founders of that movement.
Francis: Why is it that you won these awards, what specifically did you do best?
Jim: I think the unusual thing about us was, we bucked the Silicon Valley greed-and-profit-seeking motivation that often leads Silicon Valley to do some kinds of nasty things. We said, look, no, we’re set up to serve the public good. So I think one reason that people got excited and recognized us was that the idea of an exciting Silicon Valley startup company that had chosen to be a charity, to be a nonprofit, and to focus on doing social good kind of flew in the face of how people regarded Silicon Valley, which was: make money at all costs, and kill yourself along the way. I think the other reason was that the things we were doing were really understandable. Many tech companies have great products, but the average human being cannot understand why this or that middleware company exists. We were helping blind people read—you know, we were the Napster of books, when it came to Bookshare. We were helping document human rights and helping convict genocidal generals of genocide. I think this captured more people’s imagination: that technology could be used deliberately for good, rather than occasionally evil by accident, which is certainly the story of big parts of the tech industry today.
Chris: I like the phrase the “G-MAFIA”—it stands for Google, Microsoft, Apple, Facebook, IBM and Amazon, the evils of, ah…artificial intelligence these days.
Jim: Basically, we were using what was considered at the time “artificial intelligence” to actually help blind people read. And AI has that potential to do good; it’s just that right now I think we’re at a point where a lot of people are applying machine learning/AI in very sloppy ways…and hurting a lot of people, because they’re just ignoring many of the things we know about things like statistics. Anyway, we can come back to that, but I’m spending a lot more time helping not only the disability community but other minority communities understand some of the threat that AI, badly applied, actually poses to their interests.
Francis: One of the things that I found really troubling during, like, the 80s and 90s even, was the religion of the free market—this idea that the free market could solve every problem. One of the things that I’ve seen, especially in research, is that when there isn’t a lot of money to be gained, it’s hard to get funding and it’s hard to get things up and running a lot of the time. Say, for example, with rare diseases, that kind of thing. You know, we have this situation now where nonprofits kind of pick up the slack, but it seems to me that there’s an inefficiency to it all because, for example, my girlfriend has a nonprofit, and she spends half her time or more raising funds. Is this model, in your view, working? Are there other ways to approach it on a larger scale? Is government maybe supposed to play more of a role?
Jim: Well, the short answer is yes. People often try to say, Jim, why aren’t you a for-profit? Couldn’t you be making a lot more money as a for-profit? And the answer is…yeah, but we want to work on social problems. And many social problems are directly connected to a market failure. The reason that Arkenstone—you know, the original name of Benetech—got started was because our investors said, “no, that doesn’t make enough money, don’t do it.” Now, the free-market religion usually goes the extra step of saying, if it doesn’t make a lot of money, it’s a bad idea. And that’s the idea that I reject, and a lot of other people reject, which is: wait a minute, if you think that, then you’re going to consign 95% [men who need] to never getting the benefits of most of this cool technology we’ve created. And the great thing about technology is, the marginal cost of a new piece of software, a new chunk of content, is next to nothing. So as long as you can see your way clear to actually working on this thing that’s not an exciting market, you can do an amazing amount of good for almost no money. Bookshare is an example: we promised any student in the US that needs a book, either we’d already have it, or we’ll go get it and add it to our library. And now the library has 700,000 books and about 700,000 users—and that runs on $10 million a year. For less than another $10 million a year, we can solve the problem for the whole darn planet, which by the way is a fraction of what the planet spends on library services for people with disabilities. The leverage is terrific. The other part of your question is about what models are there. There are basically two models. One model is, you encourage the creation of nonprofit social enterprises, like Arkenstone or Benetech, and now there are several hundred of them. We were alone back in those days—the other people who were starting similar things at the same time, we didn’t know about each other; I didn’t know this field existed for my first ten years in it. Now we know that we’re a field, and the people who create the technology—the companies, the academics, the authors, the publishers, whatever it might be—are often quite generous with access to their intellectual property to help more of humanity, so we’re actually able to get our hands on this stuff. So I think that is a good model. Still, the nonprofit sector is never going to be as scalable, as efficient, as something that actually makes money, so if you can do social good and make money, I encourage people to do that. But I think your last point is, what about regulation—I think it is possible to take some of the worst things about industry and change some of those things by legislation. The one that a lot of people with disabilities are familiar with: it’s actually against the law to discriminate against people with disabilities. Now, we all know that doesn’t stop it from happening, but on average it makes it harder, and as people lose more and more lawsuits, they do more and more to avoid getting, you know, caught in a lawsuit, and we actually move the ball forward.
So I think that both of those models are important, and it’s clear that we need more nonprofit social enterprises, and we also need more government regulation to remedy some of the excesses of the market, some of the negative social consequences that come from—you know, whether it’s pollution and climate change, or discrimination through AI—these are all things that need some attention, in my opinion.
Chris: Can you speak more to some of the human rights and the non-disability related things that you work on at Benetech?
Jim: Oh sure. I think the great thing about the transition at Benetech was, it gave us some money to respond to some of these needs. And so the story of Benetech has been: starting from a base in technology serving people with disabilities (which, by the way, is still more than two-thirds of what Benetech does), we’ve been able to do quite a number of projects on other social issues. One of the big questions we had was, how can we help prevent atrocities and human rights violations in the developing world? And you know, we thought about it a lot, and the best thing we could come up with was: what if you could capture, and not lose, the testimony of people who survived human rights abuses or witnessed them? And that started a very long sequence of work at Benetech—it’s been going on now for over 15 years—of supporting the human rights movement. Because frankly, if you think about the human rights movement, the only thing it has is information. I mean, activists and information are its only assets. And we found out that the majority of these stories, these truths, were getting lost—groups were going out of business, their offices were burned, their computers were stolen—so the idea is, hey, let’s capture those stories, let’s back them up into the cloud so that they’re not lost. And then, if we get a big pile of data, we essentially had a big data group that would analyze the patterns, go testify in genocide trials, identify patterns of basically who did what to whom, and be part of the support for the outnumbered human rights activists. And so we got involved in a lot of the large-scale human rights violations—you know, civil wars and conflicts—and we helped people really understand what the numbers were; in political science, it was kind of unusual to actually be asserting things based on data rather than opinion. And we had lots and lots of data on a lot of these civil wars and conflicts, from many, many different groups. We also moved into supporting the LGBT community in Africa, after capital punishment for being gay was being floated as a law in certain African countries; we helped groups write the first reports on police violence against gay people in their countries, leading to change, and often we’re helping the UN. So, for example, right now our biggest project in this area is that there are between five and ten million videos that possibly include information about atrocities in the Syrian civil war, and we’re writing machine learning/AI algorithms to help basically go from the five or ten million videos that you might want to look at—which no person can actually do—down to maybe the 500 or 1,000 videos that might be relevant to preparing a case against the people who launched chemical munitions [see the sketch after this answer]. We’re not a human rights group; we’re the nerds and the scientists who help make the human rights movement more powerful. That’s one example. Another example is the environmental field, which came to us about a dozen years ago and said the state of the art for project management in the environmental field at the time was an Excel spreadsheet—you know, it’s another case of the market failure that we talked about: construction had fifty different project management packages, depending on what you were constructing.
But people who were running, you know, wetlands restorations, or campaigns against environmentally bad practices, were stuck with an Excel spreadsheet. And a general tool like Microsoft Project was way too complex for your average biologist or activist, so we wrote something called Miradi. We jokingly called it TurboTax for the environmental activist or professional—the idea is that it would ask you a set of questions, kind of a “wizard,” and come up with a plan for how a dollar in turns into, like, more salmon or cleaner air or whatever it might be—and that project has gone on to be, you know, the leading project management package in the environmental movement. The list kind of goes on. I mean, we’re doing a ton of stuff in assistive technology; we started the DIAGRAM Center, which is all about how to make STEM and STEAM content accessible. The whole idea of DIAGRAM was, why doesn’t everyone in the field get together and build shared technology and shared standards as a common effort, given that we all care about making science, technology, engineering, math, and arts content more accessible. So that’s a great example. And then, of course, the one that I think is very exciting: the woman who took over as CEO of Benetech from me late last year, Betsy Beaumon, who’s been at Benetech for 10 years, helped get the “Born Accessible” campaign launched, and the goal of “Born Accessible” is to eventually put Bookshare out of business. The idea is that if we can convince the publishers to create their mainstream e-books completely accessibly, then blind people, and other people with disabilities related to print, can just get the standard e-book and it should work great. And that’s increasingly the case, and we’re hoping to do that for more and more complicated works so that, eventually, the need for something like Bookshare will peak, and people will start relying on mainstream e-books to be able to read what they need to read.
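[Editor’s note: to make the Syrian-video triage Jim mentioned above a bit more concrete, here is a minimal sketch of that kind of pipeline in Python—score every video with a relevance model and keep only the top few hundred for human review. The scoring function, the data scale, and the cutoff are all illustrative assumptions, not Benetech’s actual system.]

```python
# Minimal sketch of ML triage: score millions of videos with a relevance
# model, keep only the top handful for human review. The scorer below is a
# stand-in; a real system would use a trained classifier over audio/visual
# features (munitions, locations, dates), which this is not.
import heapq
from typing import Callable, Iterable

def triage(videos: Iterable[str],
           relevance_score: Callable[[str], float],
           keep: int = 500) -> list:
    """Return the `keep` highest-scoring videos without holding all in memory."""
    return heapq.nlargest(keep, ((relevance_score(v), v) for v in videos))

def fake_score(video_id: str) -> float:
    # Placeholder for a model's probability that the video is case-relevant.
    return (hash(video_id) % 1000) / 1000.0

# One million stand-in IDs here; the same code shape applies at ten million.
queue = triage((f"video_{i}" for i in range(1_000_000)), fake_score)
print(len(queue), "videos queued for human review")
```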
Chris: And what do you see for the future, what ideas are out there that you haven’t started on that you would really be enthusiastic about doing next?
Jim: So I’m actually having a blast with not being the CEO of Benetech, and the great thing is, I think Benetech is going to continue to expand and have ever-greater impact under the new leadership; frankly, after about 30 years, it was probably time for someone else to take over Benetech. And the roadmap that Benetech has is pretty clear—you know, we worked closely with the World Blind Union and other blindness organizations, especially the NFB here in the United States, to get the Marrakesh Treaty passed globally, to get the US to ratify it, Europe has ratified it…I think the goal is that, even as we work on the Born Accessible movement here in the US to reduce the need for Bookshare, there are millions of people around the world for whom access to books is, you know, ten or twenty years behind where we are in the US. And so I think the Marrakesh Treaty is going to let us, over time, become the national library in places that are where Bookshare was 15 years ago in the US. I think right now Bookshare is the free national library in easily a dozen countries already. So I think that roadmap is kind of set, and obviously Benetech is going to go off and do more and more stuff in human rights. Benetech’s also started doing stuff in health and human services; they have a project called Service Net to make information about health and human services a lot more available to the people who need them, because that field is stuck in kind of the “yellow pages” era of information management. So Benetech is off and running with those things. I’ve launched a new social enterprise called Tech Matters, as of January. It’s actually fiscally sponsored by Benetech, so we still have a connection, but the difference is that if I don’t raise money for Tech Matters, then I don’t get paid, so Benetech’s not on the hook for paying my salary. And now I’m working on a whole fresh set of social problems that Benetech hasn’t had the bandwidth to work on. I’m working with the global movement of child helplines—these are the people who, you know, in many countries take the phone call from a kid in crisis, or from someone who sees a kid being abused—helping them update their technology platform to do a better job of helping potentially a hundred million kids around the world in the next few years. I’m working on fighting slavery in the supply chain, basically unethical labor practices, and I have a next-generation environmental project that’s going to help regions figure out what to do about climate change and the environment, matching conservation up with livelihoods and agriculture and all that sort of stuff. So the common thread to everything I get to do today is: someone has a social problem that they want to solve, they’ve got a group of nonprofits or government agencies or for-profits that want to work together on solving it, and I get to be their nerd. I get to help imagine what technology products or standards or glue might help unlock the potential of these people to solve this big social problem. And frankly, it’s a blast—fresh challenges, lots of people who are very dedicated to the communities they serve, and I get to make their tools, or help see that they have the best possible tools for the job.
Francis: I think what you’re doing speaks to an enormous need in society today, where there are all these technological solutions, and maybe potential for creativity with what we have, and it’s not really being discussed even as a choice for society. I think the role that you’re playing is one that we need on a much larger scale.
Jim: I’m glad to say that I’m part of a growing movement, because a lot of people see the same problem that you see with technology. And I see it most obviously in the universities, both among faculty and students. I think that computer science faculty—to pick a group—are very concerned about what they’ve helped create, which is, in many cases, technology companies that are, if not immoral, certainly amoral, and often kind of clueless about the negative social impacts that they have. Quite a number of universities have started programs that go by a bunch of different labels. One label is “public interest technology”—the idea that people might want to work on essentially using technology to help solve social problems, serving the public interest rather than the private interest. There are people who work on “computer science plus” problems: how could computer science help ag, how could computer science help human rights, how could computer science help education. So there is a movement here, and I know that, for example, Stanford just announced a major university-wide program to try and engage their faculty, who are pretty high-powered, and their students in actually working on social problems rather than just relentlessly spinning off, you know, the next Silicon Valley unicorn company. So I like where this is going, and of course there’s, you know, very exciting stuff going on—it started especially in the last administration and has continued under the Trump administration—in reforming how the federal government uses technology to better serve people. Government agencies realize that they’ve done a really bad job of serving society; I think the Healthcare.gov fiasco of a few years ago was kind of, you know, one of the low points, and I know that a lot of people are trying to make sure that technology actually works better for, let’s say, veterans or people who are on Social Security. Hopefully we will see more progress in those areas, which touch an awful lot of Americans.
Francis: Well if you want to talk about waste, what are we going to do about this defense budget, and the amount of resources…I mean, all that power that could be used for good. I know this is sort of one of those out of left field questions, but you can’t, you know, get away with saying, hey, I’m not a rocket scientist because actually at one point you were.
Jim: (chuckles) Obviously, I got started by hearing about a military application of technology and thinking about a social application of that same technology. And the good news is, you know, Silicon Valley got its start almost exclusively in defense industry applications, and the story of Silicon Valley over the nearly 40 years that I’ve been here has been a steady move away from being focused as much on military applications of technology, toward applications that help society. And we see this in the giant protests at companies like Google about their technology being used, say, to target people for assassination with drones. There are an awful lot of people in the tech field who did not go into the tech field to build technology that did that sort of thing, and talent ultimately is one of the biggest factors in what goes on in tech companies. So many tech companies are going to have to pay attention to how their technology is being used, and I think we’re going through a period right now where we’re highlighting not only how technology is being applied to military things, but also how it’s being applied to—whether it’s enabling bullying, or throwing elections, or whatever it might be. I think people are beginning to grapple with some of these social impacts that got ignored during the go-go phase of, especially, the last 20 years of the growth of the internet.
Francis: I love the idea of being a nerd. I mean, I consider that like a pretty high compliment in my world…you know, you think that maybe this country would want to try a nerd for President, I think what we have now is like as opposite a nerd as you could possibly be.
Chris: I think Michael Dukakis was that candidate, and we…he didn’t do very well.
Jim: Yeah, I know, my sister was saying “will you please run for president” and I’m like—naah.
Chris: C’mon Jim, everyone else is…why not?
Jim: I think that one of the biggest concerns expressed by nerds, especially nerd philosophers and nerd thinkers, is that the technology we’ve created has undercut, kind of, respect for technology, for science, for fact. And they lay that at the door of, essentially, what our social media world has created: the way that a Facebook or a YouTube makes money is for people to stay on their site longer, and they’ve learned that the way to get people to stay on their site longer is to feed them more and more outrageous things, to cause them to get angry, or get sad—to basically appeal to their lowest emotions. The problem with things that are false is that they are more engaging. Essentially, Silicon Valley has created this giant engine to sort of stupidify the average person who uses their products, because it’s in their economic interest for you to get more and more false information, because it’s more engaging. ‘Cause, you know, whether it’s clickbait or a false claim, those things get a lot more attention—i.e., more people spending more time on the site—than things that happen to be true. I think there are people who are very, very worried about this, and this might, you know, come back to some of that regulation that so many people in Silicon Valley object to—but they’re, like, systematically destroying respect for science, respect for truth, respect for institutions…by creating a tool that relentlessly destroys those things in their economic interest.
Chris: There was a study published recently, I think out of a university in the UK, that showed that if you start YouTube with a brand-new account—completely fresh, you know, Google doesn’t know anything about you, so it doesn’t know what to recommend—and you do your first search on the US House of Representatives, and then just watch each video that it recommends next, within eight to ten videos you’re going to be on something promoting the flat-earth theory.
Jim: Yeah. Or Alex Jones and Infowars, or something else like that—yes. I was actually reading a book entitled “Zucked,” by an early advisor to Facebook’s Mark Zuckerberg, who actually says it’s like three or four things, but we tend to end up there because the algorithms encourage it. And AI is really good at doing whatever task you set it to, and if the task is “have people spend more time on the website and click more and look at more ads,” that’s why you end up with stuff that’s false. But I’m on the techno-optimist side. I think that most tech people want to be creating things of value, and do not want to be associated with things that are evil by accident—or now, one might argue, after it’s been pointed out enough, evil on purpose—and so my goal is to keep putting forward the idea that it is possible to make a living doing technology for social good. It may not be the best path to becoming a billionaire; it’s a pretty bad path if becoming a billionaire is your objective. But, you know, there are an awful lot of people who want to live a life that they can actually be proud of and work on things that they’re actually proud of, and as a demonstration of that, there are a lot of great teachers and a lot of great people in many professions who help people in spite of the fact that they don’t make as much money. I want to get more of the tech field to channel itself into this: how can we do good on purpose? How can we actually set out to maximize human utility—making people’s lives better? Because I think that is ultimately what drew a lot of us to being nerds.
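[Editor’s note: the dynamic Jim describes—an objective that rewards only engagement—can be shown in a few lines of Python. This is a toy illustration; the titles, scores, and single-criterion ranking are invented for the example and are not any platform’s real code.]

```python
# Toy recommender: rank candidate videos purely by predicted engagement.
# Nothing in the objective rewards accuracy, so if outrage correlates with
# watch time, misleading items float to the top automatically.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    accurate: bool
    predicted_watch_minutes: float  # the only thing the "model" optimizes

candidates = [
    Video("House committee hearing, full session", True, 2.1),
    Video("Calm explainer on how Congress works", True, 3.4),
    Video("SHOCKING: what THEY don't want you to know", False, 9.7),
    Video("Flat-earth 'proof' compilation", False, 8.2),
]

ranked = sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)
for v in ranked:
    print(f"{v.predicted_watch_minutes:5.1f} min | accurate={v.accurate} | {v.title}")
```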
Francis: We had a guest, Richard Stallman, on recently, who had this really great idea, I thought, which is to have a progressive tax on corporations based on their size. Basically what that would do is make it so, you know, at a certain point it just doesn’t make sense to get any bigger, and the idea is that that would ultimately create a more diverse and robust economy.
Jim: And of course Richard, you know, founded the free software movement, which really influenced these more community-minded values—and we are giant fans of free software; we’re also giant fans of open source software, which I know Richard’s not crazy about. But I’m not so much an economist; I’m about how we actually choose to solve this problem. I do believe that, whether it’s income inequality or abuses of tech platforms, we’re going to see more regulatory activity, we’re going to see more changes in tax policy, but ultimately it depends on the electorate deciding that they want those changes. And it’ll be interesting to see if we can get that consensus, because clearly we haven’t necessarily been moving in that direction lately.
Chris: As we just discussed, with AI driving people to increasingly faulty and useless information, people are more and more likely to be misinformed.
Jim: Yeah, and sometimes it’s much more subtle than that. I did a major study last year for a major disability donor, and they asked me to look at what technology might help if the goal was to greatly increase the number of people with disabilities in employment. Many of us on this call know about assistive technology and other ways you could make people in employment more effective and more likely to be able to get a job or keep a job, because they have tools at hand to do it. The thing that blew my mind—and maybe it shouldn’t have—was that technology has essentially taken over the recruiting and hiring process in almost all large corporations and many small and medium ones. And artificial intelligence, machine learning, has been applied to every single step in that process, and in many cases the way machine learning has been applied is egregiously discriminatory against people with disabilities. Which, one would think, is against the law in this country, but that doesn’t stop people from buying this technology or applying it, because the people who sell the technology say, “our machine learning, it doesn’t see gender, it doesn’t see race, it doesn’t see disability,” and yet the way they’ve implemented these things can’t help but discriminate. And I think that we’re part, again, of a movement of calling out these technologies and saying, “how is it possible that that technology doesn’t illegally discriminate against people?”…and I expect, actually, that we’re going to have to have disability rights attorneys suing companies over buying machine learning tools that discriminate against people with disabilities, and eventually people will have to actually correct this. But some of these companies are going to become, you know, very rich before anyone actually calls them to account for the fact that they built something that extensively discriminates against people with disabilities.
Chris: About a year ago, I wrote a blog article called “Can an AI be Racist?” and I based it entirely on what Apple suggests in the Favorites mix that comes out every Tuesday, if you use Apple Music…every week they recommend 25 songs that you’d like to listen to, you know, from your own library, and put it together as a Favorites mix. And literally for weeks and weeks on end, the top dozen were all white artists, and the bottom thirteen were all black artists. And Jimi Hendrix was always the borderline. So for some reason, Apple Music prioritizes white artists over black artists, and my record collection’s probably 75% minority.
Jim: Yeah, it’s really fascinating how that works out, isn’t it?
Chris: Yeah, but it surprised me after a few weeks in a row, when I started following it.
Jim: This is an issue that’s getting a lot of attention. One of the things people are trying to do is make the people who work on machine learning a much more diverse crowd, because the kind of oversight that might lead to the kind of outcome you describe is less likely to happen if you have a more diverse group of people working on it, going, “gee, this result seems very odd.” But in many cases we don’t have people working on these tools who see these obvious problems. And of course, I think the gender discrimination problem is the one that’s the most obvious, and that people have the most awareness of as a problem. There’s a famous story about Amazon killing an automated resume screening tool because they could not keep it from discriminating against women. And if it’s that hard to stop something from discriminating against women, imagine how hard it is to stop it from discriminating against minorities or people with disabilities.
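[Editor’s note: the mechanics behind the Amazon story are easy to demonstrate on synthetic data. The Python sketch below—invented numbers, nothing to do with Amazon’s actual tool—shows how a model trained on historically biased hiring labels rediscovers a protected attribute through a correlated proxy (say, word choices on a resume) and reproduces the bias, even though the protected attribute itself is never a feature.]

```python
# Toy demonstration: a classifier trained on biased historical hiring labels
# reproduces the bias via a proxy feature, despite never seeing the group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                      # what hiring *should* depend on
group = rng.integers(0, 2, size=n)              # protected attribute, e.g. gender
proxy = group + rng.normal(scale=0.3, size=n)   # correlated "neutral" feature

# Historical labels: past hiring favored group 0 regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n)) > 1.0

# Train only on skill and the proxy; the group column is deliberately absent.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted hire rate = {rate:.2f}")
# The proxy lets the model rediscover the group and replicate the old bias.
```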
Francis: I think that’s really a fascinating line of thought, because it circles back a little bit to what we were talking about earlier, where in, you know, like a capitalist free market system, you know there’s going to be like certain things that just don’t get attention that really need attention. I wonder if there’s some kind of connection there.
Jim: The example that I use in this report is a company called HireVue (and they spell it, you know, Hire like hiring people, and Vue like v-u-e, I think). What they do is create a screening tool where they show you a video and you record yourself answering it, and then a machine learning algorithm analyzes your facial movements and your voice tone and your word choice, and decides whether you are the one in five people who do this who gets an interview with a human being. So they screen out 80% of all applicants. And I think we can all imagine many, many different kinds of disabilities that might get in the way of using this, from accessibility problems with the app itself, to actually pointing the camera at the right spot…and then the question is, well, how many people with disabilities were in the training set that they used to create this “scientifically validated” thing? I’m guessing not a lot of blind people were in their training set. So, you know, both the algorithm and what it’s collecting have tremendous discriminatory capabilities. What if someone can’t speak? What if someone has had a stroke and half their face doesn’t move? What if they’re from a culture that discourages obvious shows of emotion? I mean, all these things are discriminatory, and then you have the training set, and how it was trained, and I’m guessing that it did not reflect a diverse population that included lots of minorities and people with disabilities. And yet this tool is out there and being used all over the place. One of my favorite geeks, the guy who was the head of our human rights program for almost ten years, said, “something that you all should watch out for is, when the customer for a machine learning tool and the people who build the machine learning tool…if neither of them suffers any consequences when the machine learning tool gets something wrong, you’ve got a case of moral hazard.” And this is a classic example: the company that bought the tool is saving money, the company that sold the tool is making money, and if they egregiously discriminate against people with disabilities…who’s suffering? People with disabilities, not the people who are in the middle of this transaction. This is the core of the problem we’re having with the new generation of technology: the people engaging in the financial transactions are not the people who suffer the consequences of the decisions, right? The users of Facebook, the people posting on Facebook, they’re being commoditized and product-ized in exchange for a free tool, but, you know, it’s Facebook and their advertisers that are making all the money. This problem just keeps resurfacing. We’ve moved away from the traditional market of “I am the seller, you are the purchaser, and ours is the only dynamic”…Silicon Valley has, in many cases, moved to a dynamic where the person who actually uses the product is the product. I think we’ve all heard that kind of claim, but it shows up in this sort of thing, where the people who suffer the consequences of it going wrong aren’t actually the ones making the decisions about how to build the product, or actually paying for it.
Chris: And how do you see a path to disrupting that?
Jim: I mean, we’ve talked about the two paths that are there: starting nonprofit social enterprises that actually focus on doing good with the technology, and regulation to curb the most excessive abuses by the for-profit world. I’m not naive—it’s not going to be possible for a nonprofit social enterprise to displace Facebook; I don’t take that as a very serious option. But I find that many of the technologies, and the things we’ve come to understand from these technologies and these successful companies, can be applied to deliberately doing social good. Obviously, doing pattern recognition to help blind people read books was the one that started my career. Right now we’re trying to figure out how you could use machine learning to better prosecute war criminals, in the Syrian context. In the project we’re working on around, sort of, large-scale environmental stuff: how could we be using machine learning to better model erosion and water retention in regions that are going through land degradation and desertification? The same tools can be applied to these things, and I think a lot of this is intent. We need to get more people intent on doing social good with technology, to create value that is not purely privatized, that actually keeps in mind the impact this has on society. And then we need regulation that actually makes it difficult for people to go out and just ruthlessly exploit people, which they have a great habit of doing.
Francis: Another thing that I think is a big flaw in our society today, in regards to its relationship with new technology, is basically how the workday has gotten more and more intense even as the actual amount of labor it takes to sustain a quality of life for the world has gone down. I was wondering if you could speak to that at all?
Jim: There are people working on this, and different groups have tackled different parts of it. There’s a guy who came out of the tech industry who started a movement called “Time Well Spent.” The idea is that these tech tools have stolen a lot of our attention, and that has caused our human relationships to suffer, because these things are designed to be more and more addictive, overcoming many of our self-governing mechanisms—you know, the ones that tell us we should spend more time with our family, for example. And so they’ve influenced new features on the latest generation of iPhones and Android phones: tools that help keep you from looking at email right up until you go to bed, features creating more awareness of, gee, I spent 40 hours last week playing this online game, maybe that’s actually not what I want to do. We also have some things going on in terms of social norming and regulation, and the Europeans are further along on this, whether it’s much stricter privacy requirements and antitrust requirements, or actually fining companies like Google billions of dollars for violating those things. I also know there are European companies that turn off email in the evenings so that their employees can’t work on company email after a certain point—after six o’clock at night, or before eight o’clock in the morning, whatever it might be. One of my favorite books on this subject is from Tim O’Reilly, the guy who coined the term “open source” and has been a big leader in the tech field for a long time; he wrote a book called WTF, which, when he gave a talk on it in the Obama White House, got sanitized to “What’s the Future.”
Chris: That’s funny, ‘cause President Obama was on Marc Maron’s podcast called WTF, and he expands it to be the largest philosophical question of our time, “What the Fuck?”
Jim: Yeah, exactly. Well, anyway, it’s good to know that President Obama’s up for this in multiple dimensions. But Tim’s book—I mean, there’s a lot of exciting stuff there about where he sees the future going. One of his biggest points is: we get to choose. You know, people in the tech industry often present this as inevitable—it must be this way, data wants this, business just works this way. And that’s not actually true. As a society, we can choose to prioritize privacy more, or prohibit some of the worst abuses of our data, or whatever it might be. So I think the question is putting, sort of, society back in charge of making some of these choices, whether informally, by what people choose to do and not do, or by what their legislators do in terms of regulating industry.
Chris: Changing gears, then: what is it about technology that you are most optimistic about, looking to the long-term future? Like, you know, where do you see us in 20, 30, 40, 50 years if your optimistic vision of technology happens?
Jim: I think there are some things that technology can do for us that will make lives better. So let’s say that we have some agreement on what a better life is—or whether that’s just people having more autonomy to make choices about their lives. We could imagine technology helping solve the climate change issue, or taking the edge off some of the extreme impacts of climate change. We can imagine technology and access to information being such that education becomes more effective, and that the rights of women and minorities and people with disabilities get a greater level of respect. Obviously there’s tremendous stuff going on in the medical area. So if our goal is to reduce human suffering, to improve the quality of life for people, to give people more autonomy and more choices in how their lives unfold, so that communities can make choices about how they want development or industrialization or conservation to be pursued—for every single social issue that we face, there are a lot of people working on that issue who want to make a dent in it, and I see technology as an indispensable tool in helping realize those visions of a better, more just, healthier, greener planet, whatever it might be. And so, you know, that’s what makes my job so much fun. If someone sits down with me and says, here’s a social problem, and here’s the better world that we can imagine, it’s not hard for me to come up with five exciting technology ideas that might help contribute to that—a couple of which are bad ideas, a couple of which are probably great ideas; I don’t know which are which right now, but it’s not hard to figure that out. That’s what I get to spend my time on, and I know there are an awful lot of people coming out of the tech industry who would like to be doing that kind of work, and I want to make that just more normal, more sustainable, more of a career choice that more people with tech skills can actually pursue.
Francis: What would you recommend to someone who is hearing this and wants to change careers now? What would be the first steps for someone like that?
Jim: You know, you’re most powerful when you bring skills to bear. If you are mid-career, there are a lot of things you’ve learned in your career that might apply to doing social good, right? I mean, I think my background as a tech entrepreneur and a machine learning guy actually turned out to be pretty darned handy for a lot of the things that I ended up doing, leading Benetech, and now Tech Matters. And if you are early in your career, I often advise people, saying: what are you really good at? Get better at it, get some experience. Some people come out of school and go straight into the nonprofit sector, and I think that is increasingly a career option, but I’m also aware that the way our economic system works, often people come out of school with so much debt that they have to go to a job that makes more money. But I see people coming to the kind of work that Benetech does fresh out of school, early career, mid-career, late career, in the final sort of phase of their career—people at every step along the way are saying, “I want to move from money to meaning,” which is sort of one of my catchphrases. And if you have a skill that’s actually applicable to these kinds of social good applications, there’s a lot of meat out there. It just doesn’t happen to pay, you know, $500,000 a year.
Chris: But you can make a reasonable salary in the non-profit sector. Because of some research I’ve done recently, I know the salaries of an awful lot of people working in the non-profit sector in the blindness space, and they’re making a living wage.
Jim: Yeah. And yes, I mean, the CEOs are generally not making, you know, billions or hundreds of millions or tens of millions—generally not even millions—but we can get people who are working for big tech companies to come join us. They often take a significant pay cut, but you know, we can still pay more than $100,000 a year to a software developer with a lot of experience, even if they could make more than $200,000 or $300,000 at some of these tech companies. I mean, the average salary at Facebook is like $250,000 a year.
Chris: But that doesn’t include contractors, and they have a ton of contractors there.
Jim: No, they do have a pretty deft way of pushing those people off payroll. But you come fresh out of school from an elite school, and you get paid an awful lot of money to go to these companies. But again, if you’re not about profit-maximization but about working on something that you really care about, yes, you can make a decent living, and…we have lots of people who work for Benetech from around the country and take advantage of the flexibility of working from home. Many of our employees with disabilities are actually working from home rather than living in a very high-cost area like Silicon Valley that doesn’t have great transit. They can live in a different place and do a great job, because frankly, given the kind of technology we have today, your online presence doesn’t look much different whether you’re in Chicago or New York or down the hall here in Palo Alto.
Francis: One of the things that I try to do on this show is create almost like a brainstorming kind of moment, at times—where you’re maybe even, like, theorizing about how the future could be, and either what technology you would predict or, importantly, what technology that isn’t being used right now could, if implemented, make huge changes for the better.
Jim: I tend to be more practical…even…
Chris: This gives you the chance to step into speculative fiction.
Jim: (laugh) Alright, this is my Atwood moment.
Francis: This is the show where we’re up for that.
Jim: The thing that really excites me about technology and the directions we’re going—with sensors, with better medical technology, with better data collection technology, with better machine learning technology—is that the idea that we would be able to understand social problems at a far more detailed level is very exciting, and a bit terrifying, right? So the challenge to us going ahead is: how could we use knowing everything we possibly might want to know about a social challenge, and then use the technology to do social good, while still respecting the privacy and human rights of the people involved? That is the essence of what I want to see going forward. I just love sitting down and saying, imagine we know everything—now what will we do? Because the way that we’re going, that’s actually a realistic assumption for tackling some of these problems, thanks to the incredible sensing infrastructure and data infrastructure that we already have, which could be applied not so much to making money, but instead to making life better on the planet.
Chris: Is there anything you’d like to promote or plug, whether it’s something you’re working on or something that other people are working on?
Jim: I think the thing that I want to “plug” is that people should get more involved in using technology explicitly to do social good. I think that it’s something that’s in many people’s hearts, and they feel like it’s like almost they don’t have permission to do that…I want to give people permission to go out and find a way to make a living while doing really great things through technology.
Chris: Excellent. And with that, thank you so much for coming on Making Better, Jim.
Jim: I’m glad to be part of it. Thanks.
—END