with Tonya Riley
The tech industry’s recent reckoning isn’t only happening in Washington. Universities are also evolving their approach to teaching and researching ethics.
The University of Notre Dame unveiled a new Tech Ethics Lab with the backing of IBM, which will conduct research on some of the most pressing dilemmas confronting the field — from how to build privacy into products to weighing how police should be able to use tools such as facial recognition without perpetuating racial biases.
The initiative, in which IBM is investing $20 million over the next decade, arrives at a critical moment as the 2020 election, the coronavirus and widespread racial unrest are prompting new debates about the role of technology in society. Another early project possibility: How technology could help people return safely to work during the coronavirus pandemic.
“This is a moment in history for us to consider the impacts technology is having on people and society,” Christina Montgomery, IBM chief privacy officer, said in an interview.
The lab signals that ethical issues could be raised and addressed much earlier in the development of new technologies.
Many of the current privacy or disinformation challenges at companies such as Facebook and Google were not at the forefront of their founders’ minds as they were creating those companies as students. But an increasing push to include more ethics resources on campus could change that.
“Rather than following the ‘ready, fire, aim’ approach sometimes used in developing new technologies, we hope to provide resources that allow developers and industry to create better, more responsible technologies that positively benefit society,” said Mark McKenna, acting director of the Technology Ethics Center at Notre Dame.
The accelerating development of artificial intelligence, quantum computing and other technologies is also putting pressure on universities to think about these issues at earlier stages.
A variety of academic disciplines could play a role.
The new lab is housed within Notre Dame’s Technology Ethics Center, which launched in fall 2019 to bring together people from different academic disciplines who are interested in the dilemmas confronting the industry. The center was founded on the belief that many tech ethics discussions lack input from experts in the social sciences and humanities, particularly philosophy.
“A lot of technology ethics problems are really ethical problems that have existed a very long time, but they just have new applications because of the nature of technology,” McKenna said in an interview. “You can’t really make progress on them if you’re just looking at them from within technical disciplines.”
IBM and Notre Dame began discussing the new lab in the fall. The lab also plans to tackle problems related to artificial intelligence, in hopes that improving ethics and transparency will allow people to trust it more. Right now, many algorithms are a black box: people affected by sensitive decisions, such as hiring or mortgage approvals, may not know how those decisions were made.
This shift goes far beyond Notre Dame.
Colleges across the country are shaking up the way they teach ethics to computer science students. In the heart of Silicon Valley, Stanford’s Center for Ethics in Society launched a large course two years ago that integrates computer science, public policy and ethics. About 250 students took the course at launch, and the university continues to offer it. The center is also working to embed ethicists within the faculty to develop assignments that would be included in the core curriculum for computer science students.
Robert Reich, the director of that center, said these efforts date back to 2015, but the recent problems in the industry have given the work greater urgency.
“Stanford is the site for many of the innovations of Silicon Valley; it also has responsibilities to invest in research and teaching on the ethical and societal dimensions of technological innovation,” he told me in an email.
The Stanford computer science department has also been including more requirements and options related to ethics in its degree requirements.
And Carnegie Mellon University established the Block Center for Technology and Society in 2018, which “examines the societal consequences of technological change, including that from automation and AI,” J. Michael McQuade, its vice president for research, said. Undergraduate students majoring in artificial intelligence are required to take an ethics course.
Correction: This story has been updated to correctly explain how Carnegie Mellon is incorporating ethics into its curriculum. The university created the Block Center for Technology and Society in 2018 to address these issues.
Our top tabs
California will start enforcing its broad new privacy law today, despite calls to delay because of the pandemic.
“For sure we will start enforcing on July 1,” California Attorney General Xavier Becerra told Rachel Lerman in an interview.
The law, which gives California consumers the ability to ask companies to stop selling their personal data to third-party advertisers or others, went into effect in January. Starting today, Becerra’s office can start sending businesses warnings that they might be in violation of the law and give them 30 days to address the issues before facing possible fines or litigation.
Becerra declined to say if he’ll start issuing warning notices today, or to which companies he might send such notices.
Under the law, consumers have to be proactive about reaching out to companies themselves. Becerra told Rachel that his office has received some reports complaining about how companies are responding. He also called on people to always read the required disclosures on companies’ homepages when they visit a new website.
He acknowledged the controversy over forging ahead with enforcement as companies are dealing with the business implications of the coronavirus, but he noted that the law went into effect in January.
“It’d be very awkward to continue another six months as some companies were requesting where people would have rights, companies would have obligations, but no one would be there to make sure those rights are being complied with,” he said.
Facebook removed hundreds of accounts and groups associated with the anti-government extremist “boogaloo” movement.
The platform designated a part of the movement that advocates for armed violence as a “dangerous” organization, Rachel reports. The most recent takedown included 220 accounts, 28 pages, 106 groups and 95 Instagram accounts associated with the loosely organized militia movement.
Federal criminal charges of murder and conspiracy-related crimes against some of the members prompted the action, Facebook said.
The boogaloo movement, which calls for the armed overthrow of the government, has received national attention as its members have become a noticeable presence first at protests opposing stay-at-home orders and more recently at protests against recent police killings.
Facebook removed more than 800 boogaloo-related posts for violating its policies against inciting violence before yesterday’s sweeping takedown, the company said. Facebook originally banned violent boogaloo content in May, but as The Technology 202 reported, affiliated pages were still sharing violent content. Facebook continued to recommend boogaloo content as recently as last week.
Lawmakers have also pressured the company to be more aggressive in removing boogaloo content. “The prevalence of white supremacist and other extremist content on Facebook — and the ways in which these groups have been able to use the platform as organizing infrastructure — is unacceptable,” Sens. Mark R. Warner (D-Va.), Mazie Hirono (D-Hawaii) and Robert Menendez (D-N.J.) wrote in a letter to Facebook chief executive Mark Zuckerberg yesterday.
The letter also asks how Facebook prevents incitement of violence in public and private posts, and whether federal law should continue to protect the platform from civil liability for facilitating violence.
Biden’s campaign is demanding that Facebook take down Trump’s “hateful content” and misleading statements about vote by mail.
The campaign’s recent letter to the tech platform signals escalating tensions between the former vice president and the social media giant, Craig Timberg and Isaac Stanley-Becker report. The campaign raised concerns that Facebook has reworked its policies to accommodate Trump, as recounted in a recent Washington Post article.
“We are troubled by The Post’s confirmation that after President Trump’s tweets about the George Floyd protests, Facebook ‘chose to haggle’ with the White House, requesting edits and deletions, rather than taking a clear and transparent stand based on established policies,” Biden campaign manager Jen O’Malley Dillon wrote to Nick Clegg, Facebook’s vice president for global affairs.
Both Clegg and Facebook have denied the company changed policies to accommodate Trump.
The letter also requests that Facebook remove previous Trump posts that have claimed without evidence that voting by mail is a major source of electoral fraud. The letter alleges that the posts amount to voter suppression, which violates Facebook’s guidelines.
“We appreciate the concerns raised by the Biden campaign and look forward to sharing more details about our Voting Information Center, where more Americans will be alerted to accurate, authoritative information about voting than ever before,” Facebook spokesman Andy Stone said in response to the letter.
Alphabet-owned Verily will suspend some employee bonuses to fund diversity initiatives.
Some employees are criticizing the move, saying the company should fund the initiatives with a separate source of money, Hugh Langley and Rob Price at Business Insider report. Employees told Business Insider that the cuts were especially demoralizing after many worked overtime on the company’s coronavirus response. The change applies to spot, not annual, bonuses.
“The use of spot bonuses to subsidize social justice programs such as Healthy@Work for HBCUs [Historically Black colleges and universities], clinical trial recruitment of underrepresented populations, and an internal Product Inclusion group implies that these efforts are charity causes not worthy of their own investment,” employees wrote in a letter to management obtained by Business Insider.
The employees are also calling on the company to create a board made up of executives and employees to work on diversity and inclusion at the company.
“At this time, we think it’s important we put our money where our mouth is, and direct some of our discretionary funds — such as those typically used to fund a spot bonus program (which is separate and distinct of our annual bonus program) — to bolster our efforts to ensure our products and services are accessible to the people who need them,” Verily spokesperson Carolyn Wang told Business Insider. “This requires making a few small sacrifices, but why wouldn’t we do that?”
Rant and rave
Facebook is changing its algorithms to value news…again. The platform will start prioritizing original reporting in how news stories are ranked in its News Feed, Sara Fischer at Axios reports. The change was met with a chilly response from journalists who pointed out that this isn’t the first time the platform tried to pivot back to valuing news.
Journalist Alex Kantrowitz:
Ah, Facebook’s annual “we value original reporting” announcement is here: https://t.co/f8KInxT0WZ
— Alex Kantrowitz (@Kantrowitz) June 30, 2020
Bloomberg News’s Mark Gongloff:
Here we go again
Next up: pivot to video https://t.co/eFfXsvYjKm
— Mark Gongloff (@markgongloff) June 30, 2020
Others, such as researcher Jane Manchun Wong, said it was a step in the right direction for the platform:
This is long overdue. We should reward ones who’s done the original reporting, not the content farms who piece them together without necessarily giving credits while beating them by popularity
— Jane Manchun Wong (@wongmjane) June 30, 2020
But Syracuse communications professor Jennifer Grygiel pointed out that the change could have unintended consequences.
This major Facebook algorithm change is too close to the 2020 US election.
If this is done poorly, hyper partisan sites such as the Patriot Post, which uses pseudonyms on its masthead, could be boosted at a time when need legitimate news. Background > https://t.co/XOELFt9IUh https://t.co/T79HmcdfoS
— Jennifer Grygiel 🏳️🌈 (@jmgrygiel) June 30, 2020
The Federal Communications Commission officially designated Chinese companies Huawei and ZTE as national security threats.
The order finalizes the agency’s November vote to block U.S. companies from using federal funds to purchase equipment from the two companies.
“We cannot and will not allow the Chinese Communist Party to exploit network vulnerabilities and compromise our critical communications infrastructure,” FCC Chairman Ajit Pai said in a statement.
U.S. officials have long accused both companies of offering a backdoor for Chinese espionage. The Commerce Department put Huawei on a trade blacklist over national security concerns last year. Both companies have long disputed any involvement in espionage.
The new order locks out any carriers using the two companies’ equipment from more than $8 billion a year in federal funding for communications providers.
But the Rural Wireless Association says that the order, which goes into effect today, could disrupt the Internet for hundreds of thousands of rural Americans whom its members serve. The trade group is encouraging the agency to wait to pull federal funding until rural carriers have a chance to apply for waivers exempting them from the order.
Microsoft and LinkedIn will provide free digital classes and job-seeking tools to 25 million people worldwide this year.
The coronavirus response initiative will provide access to tools on LinkedIn Learning, Microsoft Learn and GitHub Learning Lab, according to a news release. The initiative will include more than $20 million in grants to nonprofit organizations, $5 million of which is earmarked for organizations that serve communities of color in the United States.
More than a dozen political leaders and activists are calling on Facebook’s Oversight Board to close a loophole for climate deniers.
Stacey Abrams, founder of Fair Fight, and Carol Browner, former EPA administrator, are among the signatories of a new letter. They want the board to ensure that climate deniers are subject to the company’s third-party fact-checkers.
Last week, HEATED and Popular Information reported that Facebook allows its staffers to overrule climate scientists by classifying climate disinformation as opinion, making it ineligible for fact-checking. The social network has a partnership with an independent organization of climate scientists as part of its broader third-party fact-checking process.
“The integrity of the Oversight Board is at risk. Mark Zuckerberg has refused to recognize that he must get the facts right on climate, and refused to acknowledge that climate denial on his platform is as dangerous a threat to future generations as any,” wrote the letter signers. “We are asking you to lead the charge to fix this. Until Facebook takes a stand, climate denial groups such as the CO2 Coalition will continue to exploit your platform to sow discord and put our nation’s health and security at dire risk.”
- Mixer was a punchline to some, a home to others. Now, Facebook is vying for its streamers.
- Bozoma Saint John will join Netflix as chief marketing officer, Adweek reports.
Before you log off
Firework responsibly, folks: