These Researchers Want the People to Seize the Means of AI Production



Artificial intelligence isn't just for corporations.

Right now, inside your phone, is an AI model trained to understand human language. If you have an iPhone, you call it Siri. But I'd be willing to bet you don't know much more about it, or how it really works, than that.

You probably interact with some form of AI every day, even if you don't realize it. You sure as hell don't have a say in how it decides which taco places to recommend to you, or which articles you see online. That privilege belongs to the people who own the AI: gigantic corporations like Google or Facebook. A growing number of computer scientists see this opacity as a danger as we rush headlong into a future full of black boxes working in the background of our digital lives, and the solution, as they see it, is to democratize access to AI.


To this end, a team of computer scientists at the University of Pennsylvania, led by Jason Moore, is working on an open source interface called PennAI. They hope it will make working with machine learning tools "iPhone-easy," as Moore put it in an interview.


"Because commercial AI packages are closed and somewhat expensive, it really is a mystery to users what these programs are doing," Moore said. "If you feel like AI is something you can't understand, or don't have access to, it's something you may not try to use. Whereas if it's something you can play with for free and see what it's doing, you're more likely to take the time to learn about AI."

PennAI aims to make working with machine learning cheap and relatively easy for average people and organizations, including hospitals. There are AI workflow tools similar to PennAI on the market, but they either require a lot of work from the user (like Orange) or cost money to use (like MLJAR).

The question of "explainability" around AI decisions has become increasingly important as it becomes clear just how fallible these systems are, and what the consequences of their mistakes could be. Researchers and journalists are uncovering the many ways AI can reproduce human prejudices along racist and sexist lines. Some worry that AI, as long as it remains a corporate secret sauce, could reproduce older forms of oppression such as economic redlining. For many, the chief problem with AI right now is simply that the people most affected by these programs' decisions are on the outside, looking in.


"That's one of the benefits of an open source AI package: everyone can see what it's doing"

"Commercial AI packages don't make the source code available, and you have no idea what's going on in there," said Moore. "That's one of the benefits of an open source AI package: everyone can see what it's doing."

PennAI is in the final stages of development and may be posted to GitHub as early as this fall, Moore said. Until then, a paper posted to the arXiv preprint server on Monday lays out the vision. PennAI is a complete workflow suite with an easy-to-use graphical interface: it gives users control over which AI models they use and how to fine-tune them, explains each step of the process, and even visualizes a model's "layers" so you can see what's happening under the hood. Moore also hopes to implement a Clippy-like virtual assistant to guide users.
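To give a sense of the kind of work PennAI hides behind its interface: choosing a model and fine-tuning it typically means trying many configurations and keeping the one that scores best. This is a minimal sketch of that idea using scikit-learn, not PennAI's actual API (the dataset and hyperparameter grid here are arbitrary illustrative choices):

```python
# Illustrative sketch of automated model tuning (not PennAI's actual API).
# PennAI-style tools automate this search-and-compare loop behind a GUI.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# A sample dataset, split into training and held-out test portions.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Try several hyperparameter settings with cross-validation and keep
# the best-scoring decision tree.
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 8], "min_samples_leaf": [1, 5]},
    cv=5,
)
search.fit(X_train, y_train)

print(search.best_params_)                  # the winning configuration
print(search.score(X_test, y_test))         # its accuracy on unseen data
```

Doing this by hand requires knowing what a hyperparameter grid or cross-validation is; the pitch of a tool like PennAI is that a user picks these options from menus and sees the results visualized instead.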

Obviously, there's still going to be a learning curve to machine learning, no matter how easy the interface is. But the first step to empowering people is giving them tools, and in that regard, PennAI is a great place to start.
