
Federal Appeals Court: You Have a Constitutional Right to Film Police Officers in Public
On Friday, a panel of judges for the 3rd U.S. Circuit Court of Appeals unanimously ruled that the First Amendment protects individuals’ right to film police officers performing their official duties. The 3rd Circuit now joins the 1st, 5th, 7th, 9th, and 11th Circuits in concluding that the Constitution guarantees a right to record. No federal appeals court has yet concluded that the First Amendment does not safeguard the right to film law enforcement officers conducting police activity in public.
Friday’s decision involved two instances in which the Philadelphia police retaliated against citizens attempting to film them. In the first incident, a legal observer named Amanda Geraci tried to film police arresting an anti-fracking protester when an officer pinned her against a pillar, preventing her from recording the arrest. In the second, a Temple University sophomore named Richard Fields tried to film police officers breaking up a house party when an officer asked him whether he “like[d] taking pictures of grown men” and demanded that he leave. When Fields refused, the officer arrested and detained him, confiscating his phone and looking through its photos and videos. The officer cited Fields for “Obstructing Highway and Other Public Passages,” although the charges were dropped when the officer failed to appear at a court hearing. Geraci and Fields filed civil rights suits against the officers who interfered with their filming attempts.
Writing for the court, Judge Thomas Ambro agreed that both Geraci and Fields held a constitutional right to record the police—a right that officers violated in both instances. “The First Amendment protects the public’s right of access to information about their officials’ public activities,” Ambro wrote. This access “is particularly important because it leads to citizen discourse” on public and political issues, the most highly valued First Amendment activity. Thus, the government is constitutionally barred from “limiting the stock of information from which members of the public may draw.”
State Department Tries to Start “Fake Twitter Feud,” Understands Neither Feuds nor Twitter
With the 2017 G-20 summit kicking off Friday, the week ahead will, presumably, be a busy one for the U.S. State Department. But as some in the institution prepared (we hope) for that important meeting, others were busily proving that they have no idea how anything on the internet works.
In Ars Technica, David Kravets reports that Mark Lemley, director of Stanford Law School’s Program in Law, Science, and Technology, received a peculiar email from an unnamed official in the State Department’s Bureau of Economic and Business Affairs. In that message, the official solicited Lemley’s help in producing a “fake Twitter Feud” over intellectual property, a “feud” that would, ideally, also involve organizations like the Motion Picture Association of America and the U.S. Patent and Trademark Office. If that seems confusing, here’s how the official explained it to Lemley:
The week after the 4th of July, when everyone gets back from vacation but will still feel patriotic and summery, we want to tweet an audacious statement like, “Bet you couldn’t see the Independence Day fireworks without bifocals; first American diplomat Ben Franklin invented them #bestIPmoment @StateDept” Our public diplomacy office is still settling on a hashtag and a specific moment that will be unique to the State Department, but then we invite you to respond with your own #MostAmericanIP, or #BestIPMoment. Perhaps it will [be] an alumni [sic] defending intellectual property in the courts or an article that your institution has produced regarding this topic.
We’ll get to the profound strangeness of that paragraph in a minute, but here’s the bigger picture: The State Department—which indirectly confirmed the veracity of this email to Ars Technica—wants to get people on Twitter to care about intellectual property law by encouraging institutions to bicker about the best examples of it. Kravets describes this plan as a “propaganda plot,” though some of Ars Technica’s commenters may be more accurate in describing it as an example of astroturfing—an artificial attempt to fabricate the appearance of widespread grass-roots support for some issue or cause.
The good news is that the State Department is apparently too incompetent to pull off anything that devious, not least of all because it clearly understands neither feuds nor Twitter. Exhibit A of this sad, silly truth is its proposed “audacious” first tweet. The first problem is that, all other things being equal (they are not), at 145 characters it is too long to work as a tweet. You could, of course, fix that by removing the State Department’s deployment of its own Twitter handle at the end, but even that wouldn’t begin to resolve the real issues with the statement.
Let’s be clear: Bifocals are a problem here. “Bet you couldn’t see the Independence Day fireworks without bifocals,” the proposed tweet reads—a bet that the State Department would surely lose. Millions may wear bifocals, but, as far as I could tell, none of them were on the Washington, D.C. rooftop where I watched this year’s explosive display. If everyone around you wears bifocals, it seems possible—just maybe—that your friends skew a little … older. And if everyone you know is older, it’s possible—just maybe—that you should talk to some younger folks before trying to manufacture a viral social media moment.
There’s also the weirdness of Ben Franklin’s place in that tweet. Where this campaign seems designed to promote intellectual property ownership rights, Franklin was famously opposed to IP restrictions, writing in his autobiography, “[A]s we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously.” Consequently, he never claimed to own the idea of bifocals, meaning that this “#bestIPmoment” is nothing of the kind.
Meanwhile, the larger issue is that the State Department doesn’t seem to want a “feud,” despite its claims to the contrary—it just wants people to use a hashtag that it apparently hasn’t even settled on yet. A proper “feud” response to that Franklin tweet might go something like, “Keep Franklin’s IP-hating name out of your mouth, you astroturfing idiot.” But instead the proposal calls for a game of barely germane one-upmanship, in which institutions would try to explain why their people were the best at IP stuff. The model here is presumably viral spats between celebrities, but the State Department’s ideal would be more like four D-list actors tweeting about the best features in their houses while otherwise remaining polite and respectful. Now that’s viral gold!
In the background, though, there’s a larger and more serious question. Where this Twitter campaign seems focused on American institutions, the State Department’s Bureau of Economic Affairs has a more global mission. It’s worth asking, then, what it was hoping to achieve with this plan—and who it was trying to reach. Given the incompetence with which the plan was executed, however, it seems likely that we’ll never know.
Related:
Turns Out That “Car-Eating Bus” From China Might Be a Scam
Remember that viral video from last summer showing a bus breezing past traffic by driving right over it? The bus was supposed to revolutionize transit in China’s notoriously congested urban areas and help with the issue of smog.
Well, on Sunday, the Chinese government took to the social media site Weibo to announce that the whole thing was a scam. It wasn’t fake news, exactly—the video itself was real, not doctored. But the video didn’t accurately portray how well the bus would work in real life.
On Aug. 2, 2016, China Xinhua News unveiled the bus to the world. According to China Xinhua News, the bus had just begun its maiden drive in Qinhuangdao, a city east of Beijing with a population of about 3 million. The bus purportedly operated by following a predetermined route and could carry about 300 people. The bottom was 7.2 feet off the ground, so cars under that height could go under it and keep driving (unless it was turning, in which case cars reportedly had to wait for the bus to finish). The internet loved the videos that emerged and labeled it a “car-eating” bus.
But the tide quickly turned.
Soon after the test run, Forbes called the project’s validity into question, noting that China’s state media had doubts about it, including how well it would actually perform, considering the test run was only 300 meters long and didn’t factor in a wide variety of details. Then in December 2016, CNN reported that the bus had been abandoned on the special tracks built for it and was causing, not fixing, traffic issues. On June 21, the bus was finally relocated, and officials announced plans to remove the special tracks on which it ran by the end of June, according to Quartz.
Now the project is running into legal troubles with its investors, 72 of whom have filed lawsuits against Bai Zhiming and the online investment platform he runs, Huaying Kailai, according to Southern Metropolis Daily, a Chinese-language newspaper.
The two raised about $1.3 billion for the project, with potential investors having to pay a minimum of $150,000 as a buy-in. The investors were promised a 12 percent return on their contribution. Police in China have arrested Bai, who also bought the patent for the design, along with 31 of his employees, NPR reports.
But rather than running from the scene of the crime, Bai said the bus would be relocated to another city after being moved from its abandoned post, Quartz reported. This guy doesn’t seem to know when to quit.
California Is Thinking About Giving “Reasonable” Internet Access to Youth in Juvenile Detention
California’s state legislature is considering a bill that would give youth in juvenile detention facilities reasonable access to computers and the internet for educational purposes and to keep in contact with outside support systems. The bill passed the Human Services Committee in the California Senate on June 27. The next step is for the bill to be heard by the Senate Public Safety Committee on July 11.
Many juvenile detention facilities in California already have wireless internet connections, and some allow their inmates to Skype approved contacts, according to Jay Jefferson, the legislative director for the bill’s sponsor, Mike Gipson. But most juvenile inmates have very limited access to the internet, using it only when they need to take school or psychological tests online, according to Ike Dodson, a public information officer with the California Department of Corrections and Rehabilitation. Dodson wrote in an email that some juvenile detention facilities use an intranet for school-related tasks, and that officials are looking at ways to download content so prisoners can view websites without being connected to the internet.
The bill has support from big names in the tech industry, including Facebook and the Electronic Frontier Foundation. In a letter of support, the EFF wrote, “Computer literacy and computer skills are crucial to development in the modern era. … Many facilities are located in remote areas, placing youth far from their homes, accommodations should be made using modern technology to allow detainees to maintain meaningful relationships with their families.” Gizmodo reports that Facebook’s Anne Blackwood, the head of public policy for the western states, wrote a letter of support making similar arguments. “Computer literacy and the ability to communicate with technology are integral to living in today’s society,” she wrote.
Facebook for the (adult) incarcerated has long been a point of contention. In 2011, South Carolina tried to pass a bill that would add more prison time to a felon’s sentence, along with a $500 fine, if he or she created or used a Facebook account while incarcerated. The following year, South Carolina actually did make using social media in prison a Level 1 offense—a designation typically reserved for serious violations of prison policy, like violence.
The EFF has spoken out against placing inmates in solitary confinement for their use of Facebook and other sites, noting that officials can issue separate violations for each day of usage. “If a South Carolina inmate caused a riot, took three hostages, murdered them, stole their clothes, and then escaped, he could still wind up with fewer Level 1 offenses than an inmate who updated FaceBook every day for two weeks,” it wrote.
By threatening violators with solitary confinement, South Carolina takes things to the extreme. But it isn’t alone in cracking down on social media for prisoners. In 2016, the Texas Department of Criminal Justice announced that family members can’t run prisoners’ accounts from the outside. Facebook has even partnered with California prisons to take down the profiles of prisoners. But there’s a difference between adults and minors, and the proposed bill in California seems to recognize that.
The bill wouldn’t just affect the 23,000 youth in juvenile detention in California; it would also create regulations for the nearly 56,000 minors who were in the foster care system as of 2015. The legislation would ensure that minors in foster care have reasonable internet access, regardless of what home they’re placed in. The bill also gives them the right to a variety of options for independence, like maintaining an emancipation bank account. Essentially, this bill would give currently disadvantaged minors more opportunities for success.
Additionally, the bill would allow juveniles to make at least two free phone calls within an hour of arriving at a facility after being arrested and would mandate that they be able to maintain frequent contact through calls if they desire. Calls from prison are notoriously expensive, and the Obama administration tried unsuccessfully to regulate their cost. But that wouldn’t necessarily mean that internet use would be just another way to squeeze money out of those in juvenile detention. Jefferson wrote that the intent of the bill is for families and children not to bear the cost. In facilities that currently provide internet access, families don’t have to pay, Jefferson says. But he notes that he can’t guarantee a specific facility won’t charge for a particular service. He added that the bill was intentionally written in broad language so that there would be a variety of options for implementing it.
The Canadian Supreme Court Orders Google to Make a Worldwide Change
On Wednesday, Canadian company Equustek Solutions Inc. convinced the Canadian Supreme Court to order Google to temporarily stop displaying a rival company’s sites anywhere in the world, in a case that may prove key in deciding how search engines apply their policies globally.
The matter started as a case between Equustek and Datalink Technologies Gateways Inc. over a product that the latter had rebranded and sold as its own. Equustek requested that Google remove search results for the other company’s websites while the case between them was settled, and Google did so, but only on Google.ca, the Canadian version of the site. Equustek then sought an order against Google to prevent the search engine from displaying its rival’s sites worldwide. The Canadian Supreme Court agreed in a 7–2 decision.
A.I. Could Help Combat Modern Slavery, if Humans Don’t Mess It Up
Though it’s been more than a century since the end of the trans-Atlantic slave trade, the practice of using forced human labor has proven to be a stubbornly modern problem. According to 2016 estimates by the Global Slavery Index, 45.8 million people are currently enslaved worldwide.
Despite clear laws banning such exploitation, these numbers have remained high as practices continue to evolve. One of the biggest obstacles those who want to combat forced labor face is the difficulty identifying and accessing the places where it’s happening—which are often in remote or unstable areas.
But now some human rights activists may have a new tool to track some of the most notorious sites of slavery in the world: artificial intelligence. The technology holds promise to vastly expand and accelerate their important work. However, as experts point out, its implementation doesn’t come without potential pitfalls.
Netizen Report: Venezuela’s Conflict Moves From the Streets to the Internet
The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Afef Abrougui, Ellery Roberts Biddle, Marianne Diaz, Don Le, Elizabeth Rivera, Laura Vidal, and Sarah Myers West contributed to this report.
On the night of June 28, internet users across Venezuela reported that multiple major web and social media sites had gone dark. According to the digital rights organization Venezuela Inteligente:
DNS servers of the State’s Internet Service Provider CANTV were not responding to the DNS requests on Facebook, Twitter, YouTube, Instagram and Periscope, preventing users to access these platforms.
An hour later, users were able to access those sites again. This wave of censorship comes at a moment when Venezuela’s government is facing record levels of public opposition and unrest, and a rapidly escalating economic crisis that has led to widespread hunger and threats to public health countrywide. Internet access has significantly deteriorated since May 2016, when the government declared a (still ongoing) state of emergency and officially authorized online content filtering. In May 2017, the Index on Censorship published evidence of 41 websites being blocked in the country.
While authorities may see social media censorship as a useful short-term mechanism for limiting speech that sparks public unrest, they too rely on internet access to communicate with their citizens.
Just last week, Venezuelan President Nicolas Maduro ordered officials to investigate Twitter employees in Venezuela for suspending the accounts of 180 government employees and chavistas, as supporters of former president Hugo Chávez are known. Maduro vowed to “unmask” the identities of the Twitter officials responsible and to create thousands of new accounts to continue the “battle on social media.”
One Vietnamese blogger jailed, another forced into exile
On June 29, well-known Vietnamese human rights blogger and community leader Nguyen Ngoc Nhu Quynh (who blogged under the name Mother Mushroom) was sentenced to 10 years in prison after a Khanh Hoa district court convicted her of distorting government policies and defaming the Communist regime both on Facebook and in interviews with foreign media outlets.
Vietnamese authorities also forced professor Pham Minh Hoang into exile on June 24. Hoang was stripped of his nationality in May and forcibly removed from his house on June 23. He said he was detained for 24 hours before authorities forced him onto a plane to Paris. He is now separated from his wife and young daughter and unable to take care of his disabled older brother. A blogger and member of Viet Tan, the Vietnamese democratic political movement, Hoang was detained for 17 months beginning in 2010. He has conducted trainings on cybersecurity, human rights, and leadership skills since his release. “I still have a little hope, one day, to come back to live and die in Vietnam,” he told the Associated Press.
Independent Cuban journalist arrested, accused of spreading “false news”
Cuban journalist Manuel Alejandro León Velázquez was arrested by Cuban state security officers in Guantánamo province and detained for two days. Police also confiscated his electronic equipment; his mobile phone; several hundred dollars’ worth of U.S., EU, and Cuban currency; and his passport and press card. León Velázquez is a reporter for the independent news website Diario de Cuba and has covered hurricane relief efforts and infrastructure challenges in eastern Cuba. He has been summoned for a July 3 meeting with police, who charge that he has been distributing “false news.”
Moroccan web editor detained, awaiting trial
A video journalist and the director of the news website Rif24 was arrested on June 6 in the Rif region of Morocco, where protests focused on the declining economy, poor infrastructure, and government corruption have been taking place since October 2016. Mohamed al-Asrihi is currently in pre-trial detention on charges including practicing journalism without official accreditation and receiving foreign funding from “separatists” abroad. He is reportedly being held in solitary confinement in Casablanca’s Oukacha Prison.
At Thailand’s behest, YouTube censors Charlie Chaplin’s Great Dictator
A video clip of Charlie Chaplin’s The Great Dictator has been blocked on YouTube at the behest of the military-backed government of Thailand. Internet users reported on June 24 that the video clip with Thai subtitles was inaccessible on YouTube. The page for the video instead displayed the standard message: “This content is not available on this country domain due to a legal complaint from the government.”
The 1940 film The Great Dictator parodied the rise of Nazi leader Adolf Hitler. June 24 is the day when Thailand commemorates the 1932 revolution, which ended the country’s absolute monarchy.
Chinese netizens’ days using VPNs may be numbered
Internet users are anticipating that the majority of virtual private network apps for individual use will be inaccessible in mainland China by July 1. Whisperings of a state ban on unauthorized VPNs spread widely on Twitter and Weibo after the popular VPN service provider Green announced that it would cease operations by July 1. That date is sensitive because it marks the 20th anniversary of the transfer of Hong Kong from British to Chinese rule, though there is not yet conclusive evidence of a direct connection. The move may also indicate that a forthcoming ban has been expedited: China’s Ministry of Industry and Information Technology had previously announced that it would ban “illegal services,” including unauthorized VPNs, by March 2018.
Kenyan officials promise not to shut down the internet on election day
After Gabon and the Gambia shut down networks during recent elections, inquiring minds want to know whether Kenya will follow in their footsteps during upcoming elections on Aug. 8. In a press statement, Kenyan Information and Communications Technology Cabinet Secretary Joe Mucheru affirmed, “We are a digital country and that is not our intention. It is not even a remote fall back position.”
Netizen activism
The U.N. special rapporteur on violence against women is soliciting input for a report on online violence against women and girls. Submissions should be sent by Sept. 30, 2017.
The Most Important Lesson From the Leaked Facebook Content Moderation Documents
“Napalm Girl.” Philando Castile. Donald Trump’s hate speech. Fake news. The Cleveland murder. If you’re a living, breathing, clicking person, you’ve probably heard about these moments in which Facebook faced controversy for removing (or not removing) users’ postings. Beyond the substance of the speech that is or isn’t being taken down, part of the controversy derives from the realization that Facebook is constantly policing our speech at all. This process, called content moderation, has been happening at Facebook since 2008, and until recently how and what Facebook moderates has been largely opaque.
Of course, every social media post is subject to the terms and conditions of that site. At Facebook these are called Community Standards. An example of a community standard is “We restrict the display of nudity ...” But enforcing that standard is much more complex than the straightforward sentence might suggest, and until recently, the fine details of Facebook’s approach were kept under wraps. When a picture or video is flagged for violating a Facebook standard on nudity, it’s sent to a human content moderator to review. That moderator uses a number of intricate rules, internally developed by Facebook, to determine whether the content should be removed. These internal rules are what the documents published in May by the Guardian—more than 100 pages of internal Facebook content moderation rules—reveal. The documents are an incredible tool for understanding how the social network conceives of hate speech, violence, and sexual content, as well as what it finds permissible.
For example, exceptions to the nudity standard state that pictures of “ ‘handmade’ art showing nudity and sexual activity is allowed but digitally made art showing sexual activity is not” and that “[v]ideos of abortions are allowed, as long as there is no nudity.”
This week, ProPublica published an in-depth investigative article explaining how and why Facebook makes the decisions it makes in policing user content. (Full disclosure: I was interviewed for the article, and it cites my previous work on content moderation.) The article is the latest peek into a private process that has slowly become more transparent over the past few years. Some of that transparency into Facebook’s content moderation process has been involuntary. In 2013, Adrian Chen published a few pages of a leaked training manual used by third-party content moderators in the Philippines hired by Facebook. That was most of what we knew about Facebook’s internal policies until the Guardian published the leaked documents.
Not all of these disclosures have been involuntary, however. Just this week, Facebook published a blog post carefully detailing and explaining how those working in content moderation and moderation policy deal with the “hard questions” its employees have to answer in policing speech on a global platform with more than 2 billion users. “What does the statement ‘burn flags not fags’ mean?” Richard Allan, vice president of European, Middle East, and African public policy at Facebook, wrote as an example of the difficulty of sussing out hate speech without context. “[I]s it an attack on gay people, or an attempt to ‘reclaim’ the slur? Is it an incitement of political protest through flag burning? Or, if the speaker or audience is British, is it an effort to discourage people from smoking cigarettes (fag being a common British term for cigarette)?”
Allan’s post, the ProPublica piece, and the hundreds of pages of documents in the Guardian leaks demonstrate another central fact about content moderation: Many of these questions would be difficult even for a constitutional lawyer. They also highlight where Facebook has been able to use algorithms or automation to moderate content, and where the questions are so complex and so nested in constantly changing social norms that it’s likely decades before AI will be able to replace people in much of this moderation work.
That is one of the most important realities of these new disclosures: The things we’re upset about—for example, why Facebook’s “rules protect white men from hate speech but not black children”—are not things you can fix with an algorithm or new AI, as Jacob Brogan recently wrote in Future Tense. They might not even be things you can fix with new policy. Now that the curtain is pulled back, it turns out that these decisions we’re so unhappy with are just really hard problems, and that humans are making them.
Does that reality alter what we expect from Facebook? Not necessarily, but it should change how we respond to it in order to get real change, and how we focus our ire when bad decisions are made. As my colleague Margot Kaminski and I wrote earlier this week, “unlike a government, Facebook doesn’t respond to elections or voters. Instead, it acts in response to bad press, powerful users, government requests and civil society organizations.” Thus, it’s the job of civil liberties groups and user rights groups to take “advantage of the increased transparency to pressure these sites to create policies advocates think are best for the users they represent.”
And there’s hope that Facebook will listen. Last week the company released a new mission statement that prioritizes giving “people the power to build community and bring the world closer together.” It’s a much more human slogan than the prior sterile, goal-like motto to “make the world more open and connected.” Perhaps that reflects not only those of us who use Facebook every day, but also the humans who work to bring it to us.
Future Tense, Tiempo Futuro
Future Tense is excited to announce a new partnership with Letras Libres, one of the most influential ideas magazines in the Spanish-language world, founded in 1999 by Enrique Krauze. Letras Libres, based in Mexico City and publishing a separate edition in Spain, will publish a translated Future Tense article each week, starting with Cory Doctorow’s essay on the power of science fiction not only to predict, but to shape, the future. The collaboration builds upon a pre-existing content-sharing agreement between Slate and Letras Libres.
No country’s experience—or future—is as intertwined with ours as is Mexico’s, and within Mexico there is no smarter or more discerning source of rigorous thinking about the current state of the world and its future than Letras Libres.
"We are very happy and look forward to collaborating with Slate, Arizona State University, and New America on their Future Tense project, jointly exploring how technology impacts the way we live, and our future. This is a pressing question for our readers and for our magazine, which is why publishing Future Tense content will greatly benefit our Spanish-speaking audience," Daniel Krauze, editor of Letras Libres, told me.
The future of Future Tense is increasingly global, as we seek to engage new audiences. When Arizona State University, New America, and Slate first launched Future Tense, the collaboration was conceived of as a bridge between academics, Silicon Valley types, and policymakers seeking a better understanding of emerging technologies. But over time, our “Citizen’s Guide to the Future” has become a broader inquiry on what technological and scientific breakthroughs mean for the rest of us. And when we talk about “citizens” and “the rest of us,” it makes little sense to talk about any one nationality in isolation.
If there’s one thing we know about the future, it’s that we’re all in it together. And for that reason, Future Tense will be looking for other partners of the caliber of Letras Libres around the world. Stay tuned.
New Zealand Could Require Students to Learn to Code
The New Zealand Ministry of Education has introduced a revised curriculum that would require schools to teach children how to program computers before they reach high school, Radio New Zealand reports.
On Wednesday, the Ministry of Education released a draft of the initiative that lays out plans to further incorporate digital technologies into the technology-learning area curriculum. The government is now accepting input from the public about the proposed change.
The draft spells out objectives for where students should be at certain points in their education. For instance, by the end of Year 10, students ought to “be able to use a range of software to develop and combine digital content to create an outcome,” and by the end of Year 13, the final year of secondary education, they will be able to “effectively apply a refined, iterative development process to develop quality, fit-for-purpose digital outcomes that meet design specifications.” The ministry hopes to incorporate these new strands into the education system by the beginning of 2018.
According to Radio New Zealand, Paul Matthews, the chief executive of New Zealand IT Professionals, said that the digital divide is no longer about who has technology and who doesn’t—instead, it’s about those who know how to use technology versus those who don’t.
But does everyone really need to learn how to code? In a 2013 piece for Slate, Chase Felker expressed concern that the push to teach everyone some coding will lead people to overestimate their skills. Perhaps worse, he wrote, students could end up with memory-based knowledge instead of a flexible understanding of coding. He also argued that society divides labor so that not everyone needs to know how to create everything they use—and that technology should be treated the same way.
In a related piece in Quartz, Idit Harel discusses the concept of “pop computing,” a superficial level of coding that doesn’t really educate people to the level that they need.
But unlike Felker, Harel thinks the solution is instead to go to an even deeper level of education when it comes to teaching computing. Harel argues that it should be mandatory in schools, on par with reading and writing.
In a 2015 poll from Gallup, 52 percent of American students surveyed said they thought it was “somewhat likely” that they will have a job in which they will need to know some computer science, while 38 percent said it was “very likely.” So in total, 90 percent think there’s a decent chance they will need computer science.
The Gallup report said that in the U.S., many schools don’t teach computer science because they have limited space in their schedules for classes that don’t play a role in testing.
But making it mandatory to teach kids to code will come with its share of implementation issues. For instance, before they can instruct their own students, the teachers will first need to be taught more about computer science.
Radio New Zealand reports that Steve Sexton, a senior lecturer at the University of Otago, has concerns about who will make this a reality in the classroom and about the cost of the project.
“It’s not just putting the technology into the school, but it’s got to be effectively integrated or effectively implemented or it’s just going to be another headache for teachers to deal with,” he said to Radio New Zealand.
