The Institute fosters the design and development of techniques for privacy-focused machine learning
Artificial intelligence (AI) was once something you only heard about in science fiction — but not anymore. These days, AI is used for everything from computers playing chess to self-driving cars to robots you can actually interact with. But the development of AI has been largely siloed, creating a two-sided problem: on one side, researchers don’t have access to the data they need; on the other, expanding access to data creates more opportunities for privacy breaches.
The Private AI Collaborative Research Institute, originally established by Intel’s University Research & Collaboration Office (URC), aims to change that by bringing the private sector together with academics to create a collaborative environment in which researchers from a wide range of backgrounds can advance and develop privacy-focused technologies for decentralized AI.
“Machine learning has the potential to profoundly alter data analysis,” Professor Nicolas Papernot, of University of Toronto, tells Avast. “Machine learning requires diverse and large labeled datasets to perform well and make predictions that are useful. Yet, machine learning algorithms are known to leak private data they analyze in these datasets. I believe the center for Private AI will be instrumental to the design and development of techniques for privacy-preserving machine learning that enable institutions to analyze data in a more respectful manner.”
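Papernot’s point about leakage has a concrete counterpart in privacy-preserving training techniques such as differentially private SGD, where each gradient is clipped to a fixed norm and perturbed with Gaussian noise before it can influence the shared model. The sketch below is a minimal illustration of that clip-and-noise step, not code from the Institute; the function name and parameters are hypothetical.

```python
import numpy as np

def dp_gradient(grad, clip_norm=1.0, noise_scale=0.5, rng=None):
    """Clip a per-example gradient to `clip_norm` and add Gaussian noise,
    in the style of DP-SGD. Hypothetical helper for illustration only."""
    rng = rng or np.random.default_rng(0)
    norm = max(np.linalg.norm(grad), 1e-12)  # guard against zero gradients
    clipped = grad * min(1.0, clip_norm / norm)
    # Noise standard deviation scales with the clipping bound.
    noise = rng.normal(0.0, noise_scale * clip_norm, size=grad.shape)
    return clipped + noise
```

Because every individual gradient’s influence is bounded by the clip and masked by the noise, an attacker observing model updates learns far less about any single training example.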
On the private side, Intel invited Avast and Borsetta to join the Institute. On the public side, nine research projects at eight universities were chosen to kick off the first year, with work expected to run for several years. The universities are:
- Carnegie Mellon University, U.S.
- University of California, San Diego, U.S.
- University of Southern California, U.S.
- University of Toronto, Canada
- University of Waterloo, Canada
- Technical University of Darmstadt, Germany
- Université Catholique de Louvain, Belgium
- National University of Singapore
“Collaboration is key because privacy is not a purely academic problem,” Professor Papernot says. “Privacy is a societal problem and as such we need a diversity of actors to ensure the solutions we develop advance the privacy of machine learning analysis in the real world. The center will bring together experts from a wide range of backgrounds, and I look forward to learning from our interactions.”
There are several issues with current AI research that the Institute aims to address. Centralized research leads to silos and infrequent data collection, so solutions quickly become outdated as data at the edge changes rapidly. Centralized training is also insecure: it can be attacked by tampering with data anywhere between collection and the cloud. And the current system can’t guarantee accuracy, privacy, and security at the same time.
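Decentralized approaches such as federated learning address part of this: each data holder trains locally and only model updates are averaged centrally, so raw data never leaves its source. This is a toy sketch of that idea under simplifying assumptions (a linear model with squared loss and equal-weight averaging), not the Institute’s actual method; all names are hypothetical.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # One gradient-descent step on a toy least-squares loss,
    # computed entirely on the client's own data.
    X, y = data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(weights, client_datasets):
    # Each client trains locally; only the resulting model
    # parameters are sent back and averaged (FedAvg-style).
    updates = [local_update(weights.copy(), d) for d in client_datasets]
    return np.mean(updates, axis=0)
```

In a real deployment the averaging step is typically combined with secure aggregation or differential privacy, since even model updates can leak information about the underlying data.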
“Industry and academic collaboration is key to tackle the big issues of our time, including ethical and responsible AI,” Michal Pechoucek, Chief Technology Officer at Avast, says. “As AI continues to grow in strength and scope, we have reached a point where action is necessary, not just talk.”
For more details about the center, and to follow exciting future research discoveries, please visit Intel’s website.