The Absurd Excuses Countries Give for Shutting Off Internet Access
In June, officials in Jammu, India, shut down mobile internet in advance of a traditional wrestling tournament. The half-naked competitors, of course, weren’t planning to grapple with their devices in hand, but organizers were fearful of repeating what happened in 2014. That year, riots broke out because the tournament was staged on what local residents claimed was an old burial ground. The government believed that shutting down the internet would stop people from inciting one another to violence once again. Later that day, the factions agreed on a process for moving the tournament to a new site, and officials turned the internet back on.
This bizarre incident highlights the challenge of fighting internet shutdowns. From the brief disruption in Turkey of social media services during the failed coup, to the blocking of WhatsApp earlier this week in Brazil, internet shutdowns are taking more and more complex forms. Frequently, communications services are cut off shortly before gross human rights violations take place. In a separate incident later in June in nearby Kashmir, India, for example, a journalist was unable to get online after officials once again cut off mobile internet in the region. When he came back online he learned eight people had been killed during the blackout. Businesses lose money and emergency services can’t do their jobs. The Software Freedom Law Centre in New Delhi has recorded 30 disruptions in India over the past three years. In 2016 alone, the advocacy group Access Now, where I work, has documented nearly 30 shutdowns worldwide. This week in Ghana an official reiterated his intention to block social media during its election—the most critical moment in a democracy—four months ahead of the vote.
One of the strangest yet also most pernicious forms of internet shutdowns involves school exams. Over the past six months, four countries have shut down the internet because of school exams, purportedly because both students and underpaid teachers looking to supplement their income have leaked exam answers online. Iraq blocked the entire internet to keep sixth-graders from cheating—in a country whose very existence is under threat from ISIS, and where information can mean the difference between life and death.
In Gujarat, India, officials blocked mobile internet during a February accounting exam because of its “sensitive nature,” an action with drastic effects because the majority of people in the region use their mobile devices to go online. Algeria, meanwhile, blocked all websites except Wikipedia in June to prevent cheating on the baccalaureate exams, according to the civil society group Social Media Exchange, and Ethiopia soon followed suit by blocking social media this month because of university entrance examinations.
At first glance, these measures sound ridiculous (and I will argue that they are). But last week I spoke in New York with a man who had traveled from Ethiopia to the United Nations to push for human rights, and he was surprisingly sympathetic to his government’s decision to block social media: The cheaters were violating the rights of the students who followed the rules. He had a point. I remember preparing for exams, and I would have been devastated to learn that dishonest people had ruined my chances. Of the 31 suspected leakers of exam results in Algeria, several were teachers. And Social Media Exchange notes that students are desperate to get ahead in a region facing 25 to 30 percent youth unemployment.
These heavy-handed responses to exam cheating point to new challenges facing educators. Administrators may need to prohibit mobile devices in testing centers, resort to old-school calculators, install secure lockers, or develop new forms of proctoring. But trying to stop cheating by shutting down the internet is not the solution. In Geneva, the U.N. Human Rights Council declared unequivocally recently that intentional disruptions of the internet harm human rights. Companies like Facebook, Google, and Microsoft have also weighed in, speaking through the Global Network Initiative, with a strong statement that was endorsed by telecommunications companies such as AT&T, Vodafone, and Orange. Even the GSMA—one of the world’s largest technology associations—has developed strict standards for what it calls Service Restriction Orders. We at Access Now are working with nearly 90 organizations from 41 countries on the #KeepitOn campaign to push back on internet shutdowns. And it’s important for officials like the U.N. special rapporteur on the right to education, Kishore Singh, to speak out against shutdowns around examinations.
No one likes cheaters or multiple-choice examinations, and everyone has their own high-stakes-testing horror story. (In my case, it was sitting for the bar exam using an old laptop with the Y2K problem that regularly reset my computer clock.) But when it comes to access to the internet, the stakes are even higher than those of the entrance exams.
Tesla's Insane, Magnificent New Master Plan
Tesla is losing money. It is the subject of investigations by three federal agencies. Its vaunted "autopilot" technology recently drove a man into a semi truck. Meanwhile, the company is racing its own hyper-aggressive timetable to complete the world's largest electric battery factory and deliver 400,000 units of a car that it hasn’t yet begun to produce.
For any other company, this would be the time to bear down, eliminate distractions, and narrow its focus to a few core goals. For Tesla, of course, it’s just the opposite.
On Wednesday night, Elon Musk announced a new master plan for his company. It is the philosophical successor to his original master plan, published 10 years ago when few had heard of Tesla and fewer cared. If that first plan seemed implausibly audacious, this one borders on schizophrenic—a compendium of goals so futuristic and disparate that it would be a marvel for any company to achieve one of them, let alone all. They include (deep breath):
- Building at least four all-new models: a "new kind of pickup truck," a compact SUV, a semi truck, and a bus-like mass transport vehicle that delivers its passengers from door to door. They'll all be fully electric, of course.
- Developing and implementing a fully autonomous driving system that will require no human involvement. The system will have such redundancy that a failure of any part of the driving system will not compromise its ability to navigate safely.
- Creating a car-sharing platform through which Tesla owners can, at the tap of a button, rent out their self-driving vehicle to a “Tesla shared fleet” when they’re not using it. Others can then summon the car for a ride, generating income for its owner which can help to pay off the price of buying it.
- Merging Tesla and SolarCity, the country's largest solar power company, and together developing a seamlessly integrated system that can both capture and store solar power on your rooftop, turning your home into its own energy utility. And then “scale that throughout the world.”
Not even cracking the top four objectives in the new plan is Musk’s recently stated intention to essentially reinvent the mass production process, developing a heavily automated factory that can churn out cars five to 10 times more efficiently than before. In other words, Musk writes, Tesla is designing “the machine that makes the machine—turning the factory itself into a product.”
And that’s not to mention all of the company’s previously existing goals, such as ramping up production of the Model X SUV, beginning mass production of the Model 3 sedan, and completing and refining its Nevada “Gigafactory” for electric batteries. (Yes, that’s an altogether different factory from the Fremont facility where it’s rethinking vehicle production.)
The master plan is absolutely worth reading in full. It’s a document that will undoubtedly be revisited often in the years to come, whether it’s to marvel at its foresight or to reflect on its folly.
Therein lies the fascination in following Musk and his companies, Tesla in particular. You could read a blog post like this as the unhinged ravings of a man whose grand delusions have finally taken him off the deep end. Or you could take it as the gospel of a modern-day techno-prophet whose visions will shape the future of energy and transportation. You’d be about equally justified in either view.
If there is a middle ground, it’s to recognize just how much would have to go right in order for Tesla to achieve all of these goals—and how unlikely that seems—while acknowledging that Musk has made a career out of setting, and often achieving, goals that appear almost impossibly ambitious to just about everyone else. That Tesla is on the verge of fulfilling the bulk of Musk’s original master plan—which included building an all-electric sports car, using the money from that to build an all-electric sedan, and using the money from that to build an affordable electric vehicle to conquer the mass market, while also providing new options for zero-emission electric power generation—is incredible. Yet past results can never guarantee future success, and the more Musk promises, the greater the chances that he’ll eventually fall short.
Already, analysts are deriding Musk’s latest plan as “overly ambitious” and “short on detail,” among other jabs. But Elon Musk cares about skeptics and critics the way a honey badger cares about king cobras. They might occasionally sting him—as evidenced by the overly defensive and personal responses he occasionally publishes—but they don’t deter him from his path.
For years Tesla followers have argued over just what kind of company it really is. Is it a car company? A tech company? An energy company? The answer, clearly, is all of the above. But on some level, Tesla’s business is less about the “what” than the “how.” Whatever Tesla produces, and whatever markets it enters, it will rethink the relevant products and processes from “first principles,” in Musk’s words, in pursuit of radical advances that the incumbents had never dared to deem possible. The results can only be spectacular success or spectacular failure—and so far it’s been mostly the former. So far.
The Trauma of Digital Warfare, on Hacking Mr. Robot: Week 2
Slate and Future Tense are discussing Mr. Robot and the technological world it portrays throughout the show’s second season. You can follow this conversation on Future Tense, and Slate Plus members can also listen to Hacking Mr. Robot, a members-only podcast series featuring Lily Hay Newman and Fred Kaplan.
The third episode of Mr. Robot (don’t forget that the premiere was two parts) dropped on Wednesday night, bringing hacker protagonist Elliot Alderson deeper into his madness and despair. It’s unclear how long the show will keep Elliot isolated and too confused about reality to actually, you know, do things, but it seems like this episode was the complication before some resolution.
Knowing the show, that resolution will almost certainly be complicating and strange. But Elliot is a talented hacker—he can’t live a remote, analog life forever. Meanwhile, the fallout from Fsociety’s massive hack of ECorp continues. People close to Fsociety keep getting murdered, an FBI agent is poking around, and ECorp CEO Phillip Price takes an interest in Elliot’s childhood friend Angela Moss, who now works in communications for ECorp.
This week’s episode didn’t have technology driving the plot the way Mr. Robot episodes often do. It was more about exploring the parallels between our digital selves and our interior selves—parts of us that are very real, but don’t have a physical manifestation. Season 2 also seems to be meditating on the impacts of digital warfare. Though there’s no violent combat, Elliot still seems traumatized by the display of Fsociety’s power and his own. Or is it Mr. Robot’s power?
Hacking Mr. Robot is a members-only podcast series. To listen to Fred and Lily’s seasonlong discussion of the technological world the show portrays, join Slate Plus. You can try it free for two weeks.
Why It Makes No Sense to Judge Groups of People by Their Histories of Invention
On Monday, Iowa Rep. Steve King, appearing on MSNBC, asked which nonwhite “subgroups” had contributed more than white people to “civilization.” King’s comments came about a week after the hashtag #WhiteInventions appeared on Twitter, spurring some of the most unsavory types of Twitter users to brag about the things that white people had given the world.
Future Tense Newsletter: The Practice of Privacy
Greetings, Future Tensers,
The juncture of technology and privacy is a confusing one, a dense weave of forces and factors that can be difficult to disentangle. That’s the (seemingly accidental) lesson of Privacy, a play starring Daniel Radcliffe that opened this week in New York City. Cybersecurity professor Josephine Wolff writes that it disappointed, partly because it conflates personal privacy with corporate data collection and government surveillance, collapsing a handful of distinct problems into one another. To the extent that Privacy succeeds at all, Wolff suggests, it’s by encouraging the audience to think “about the data stored on their phones.”
Digital rights activist April Glaser offers a more nuanced look at such issues in an article on information security best practices for political protesters like those at the Republican National Convention this week. There are, however, a host of other ways in which many of us still happily cede information about our digital selves, not least of all through the surveys that frequently pop up on websites, which Rachael Cusick investigated for Future Tense. Want to know how to protect yourself from these invasive questionnaires? The best way’s probably to not take them.
Here are some of the other stories we read while getting excited about the new season of Mr. Robot:
- Education: The Bedtime Math Foundation’s app aims to help parents incorporate word problems into their children’s nightly routines.
- Cartography: A map library in Maine is putting its gorgeous collections online—and revealing the complex ambiguities of digitization in the process.
- Gaming: Niantic built Pokémon Go on a foundation it established with its earlier game Ingress. A veteran of that cult hit offers some tips to players of the newer title.
- Conservation: The U.S. Fish and Wildlife Service is using drones to deliver vaccine candies to prairie dogs in order to protect ferrets in the Great Plains. It sounds goofy, but it just might work.
- What concerted steps should Canada, Mexico, and the United States take to ensure that North America will become the world’s leading energy power for generations? Future Tense and the Wilson Center’s Canada Institute invite you to join them in Washington, D.C., at noon on Tuesday, July 26, for a conversation on what it will take for North America to fulfill its energy potential. For more information and to RSVP, visit the New America website, where the event will also be streamed live.
Checking for blue check marks,
for Future Tense
You Can Now Apply for a Blue Checkmark on Twitter, if You’re Into That Kind of Thing
Twitter turned 10 years old in March, a landmark that has been followed by an onslaught of changes to the platform including longer tweets and an algorithmic timeline. But Tuesday’s announcement is by far the biggest yet: Account verification will soon be opened up to the masses, allowing anyone with a Twitter handle to apply for the coveted blue checkmark usually reserved for high-profile individuals such as celebrities, politicians, and members of the media. From the press release:
"We want to make it even easier for people to find creators and influencers on Twitter so it makes sense for us to let people apply for verification," said Tina Bhatnagar, Twitter's vice president of User Services. "We hope opening up this application process results in more people finding great, high-quality accounts to follow, and for these creators and influencers to connect with a broader audience."
Back in 2009, Twitter was the first social network to enable the verification of an account—but in the seven years that followed, the process of getting your account verified has remained vague, complicated, and often unsatisfying. As a social media editor, I’ve witnessed the extravagant opacity of the process—it took close to three months before I got a response to my request to be verified, and another three before I was actually given a checkmark—and even then it was completely out of the blue. If you ask people in the media how and when they got verified, the general answer will be that it was a crapshoot.
Some people dismiss the checkmark as an unnecessary division between celebrities and normal people, or as an elitist way of saying you’re too good to interact with everyone else. But being verified isn’t just a status symbol for those roughly 187,000 individuals who have been deemed important by the Twitter gods. The real, tangible value of being verified is that it grants you the option to filter out the trolls and spam accounts that often plague those in the public eye. Without the option to shut that stream off, one tweet going viral can kick off an endless stream of commentary and hate speech. While for some odd, attention-seeking ducks this is an exhilarating thing, it can also be overwhelming, and it has been the basis for several recent high-profile exits from the Twittersphere. With the recent launch of Twitter’s off-platform app Twitter Engage, a checkmark-carrying individual is now also able to customize her view of her own content. It also lets users track what sort of influence they’re driving among their followers—an added benefit for someone trying to build a public reputation without the backing of a large institution or an agent.
It doesn’t come as a surprise, then, that Twitter would seek to remedy one of its most vexing processes in the most democratic way possible: by opening it up to everyone and allowing anyone to submit an application for verification. There will undoubtedly be a whole new group of people unhappy with how the process rolls out and, very likely, with the time it takes for an application to be considered.
To apply, follow the steps outlined on Twitter’s FAQ page, which include describing your impact on your field, listing a company affiliation, and in some cases providing government ID. If you’re rejected right off the bat, you can apply once more 30 days after your denial. There’s no indication of just how generous Twitter will be in broadening the scope of its verified users, but at least Twitter’s trying to make it better for everyone, and make it a little easier to prove yourself in an arena where it’s very, very easy to get drowned out if you don’t make enough noise.
The good news is that Twitter’s definitely going to create some jobs out of this. In the coming months, we’re absolutely going to see a newly generated niche market of professional Twitter verification application writers claiming an absurd success rate with their custom-tailored applications. On the other hand, that also means that there will be a small, sad group of verification application readers in a room somewhere at Twitter HQ who have to read all of what humanity thinks about themselves (and why they deserve to be verified).
Kudos to Twitter for not running away at the thought.
The Problem With Snapchat’s Coverage of the Terror in Nice
This article originally appeared on Slate.fr, Slate’s French sister site.
It’s a question that returns with every terrorist act or major accident: Is it necessary to spread shocking images of these tragedies? In recent months, several arguments have broken out around the circulation of violent images, especially since the introduction of apps like Periscope and Facebook Live, which let users broadcast events live.
With the attack in Nice, France, where a man killed at least 84 people with a truck, most internet users had their eyes on news channels and certain accounts on Twitter and Facebook. France 2 was criticized for interviewing a husband in front of his wife’s body, and WikiLeaks for circulating shocking amateur videos. But another platform also decided to show fly-on-the-wall content from internet users: Snapchat.
On this app, designed for exchanging and publishing ephemeral videos and photos, there are “live stories” managed by the company. By aggregating videos and photos published by users present at a precise moment and place, the app can construct the story of a particular event. For instance, there has long been a Paris story, which tells the story of everyday life in the capital thanks to snaps sent by people geolocated there.
Most of the time, live stories cover joyous events. The last Paris story, for instance, showed the Bastille Day fireworks. But in the hours following the attack, Snapchat published a “live story” on the Nice attack with very strong, even quite shocking, images, especially because they were shot from shoulder height in the crowd: a record of what people were seeing and hearing at that very moment, several seconds of live video at a time.
In the first video, after basic information about the situation added by the app, we see the crowd rushing, panicked, alongside the Promenade des Anglais. We hear a man, perhaps the one filming, say to his young child, who is in tears, “Don’t cry.” The video cuts out, and Snapchat shows a message warning us that the following images will be “graphiques,” an Anglicism signaling the violence of the collected snaps.
After that, the snaps that are less than 10 seconds long come one after another. We see people running as far as possible from the “Prom,” their faces still frightened. We hear users say that they’re hearing gunshots, or another cries out, panicked, “Fuck, what’s going on, what’s going on now? Fuck, fuck, what’s going on what’s going on? Why is everyone running?”
Yet again, we pass from one terrified person to another. After a shot of firefighters hurtling down a street, a young woman speaks directly to the camera: “Abby and I are fine. Frankly it’s fucking crazy, there’s a truck that burst in and everything, and it actually crushed people.” Under numerous videos, Snapchat continues to add information: quotations from the president of the region, Christian Estrosi, or measures taken by François Hollande. In another surreal snap, we see a member of the police anti-terrorism unit asking for news about what’s happening on a square near the promenade.
The unspooling of the story finishes on the most recent news available. Except instead of referring to the “Islamist terrorist threat,” quoting the remarks of François Hollande, Snapchat commits an unfortunate malapropism and writes “Islamic terrorist threat.”
The discomfort is just as strong when you look at the more discreet feature called Explorer. In the middle of the “live story,” you can swipe up on the screen to discover the story from other angles, thanks to other videos sent by users. Except that, contrary to classic stories, this part isn’t curated by humans but by algorithms that choose which snaps will appear in Explorer. So we see a photo with an upside-down smiley face emoji, and another using the “Promenade des Anglais” filter, usually reserved for touristic snapshots and for sharing one’s whereabouts, but this time taken just after the attack. That these two users added these symbols qualifies as bad taste; that Snapchat lets algorithms curate coverage of this kind of tragedy is problematic.
This story, far removed from the others, exists because it fits into Snapchat’s strategy of becoming a news medium. Each day, the app counts 10 billion views on its stories and 100 million users around the world documenting their lives. The company told us via email that 1 billion snaps are posted each day, both public and private. So it’s easy to understand the app’s incredible informative potential. In a March 31 article, Fortune wrote that the app wants to dominate the media: “If you are a media company of any kind, Snapchat is becoming a potentially powerful partner—but also a potentially powerful competitor as well.”
So it makes sense to see stories dedicated to events with worldwide repercussions. During the shooting in Dallas, the dedicated story warned viewers before showing them at least one sniper from far away and other videos where gunshots were heard. In France, the app hadn’t yet covered tragic events in this manner. During the attacks in Paris, the app displayed snaps by Parisians and tourists showing their support for the victims, not violent content like it did at Nice.
But given the clumsy errors that appeared in this story, and the fact that the audience able to view these violent images is predominantly adolescent, it’s fair to question the company’s handling of the matter.
When we contacted a spokesperson for Snapchat, he confirmed that the app launched its curation of snaps geolocated in Nice as soon as the attack took place (a feature that hadn’t existed until then). According to Snapchat, the general principle of live stories is to show events from hundreds of different perspectives and angles.
For events like the Euros, it’s easy to understand this kind of reasoning. But for tragedies like the attack in Nice, you might wonder whether the app’s audience really wants to live through the terror from every angle and from shoulder height, which immerses us in the tragedy more readily than images on TV, even if the user opens the story with full knowledge of the facts—especially since the story is accessible throughout the country. Twitter also surfaces tragic videos, but Snapchat goes further than Facebook, since it actively suggests these images to its users. Indeed, on Twitter, numerous people expressed their anger after stumbling into this story.
When we asked questions about the people responsible for this very sensitive news content, Snapchat refused to give us specific details on the composition of the team on duty in London, but they reminded us that it’s Peter Hamby, formerly a journalist at CNN, who’s the head of news there.
With events like this liable to repeat from one day to the next, Snapchat’s role in the news cycle will explode as it attracts more and more people. Is it necessary, as certain television channels did, to show shocking images? Will warning users with a simple message keep the young audience from seeing them? Snapchat will have to answer these questions if it wants to make a place for itself in the media world. And this kind of story asks questions of us, too, because we film this violence and send it voluntarily to Snapchat. Meanwhile, on the app, the threat hovering over France is still “Islamic.”
Subtweeting Looks Terrible on You. (You Know Who You Are.)
Back in the olden times of the mid-to-late 20th century, children in schools and summer camps used to gather in a circle and play a game called telephone. The rules were simple: One kid came up with a phrase and whispered it to the next person in the circle, then that kid whispered it to the next, continuing on until a completely jumbled version of the phrase emerged from the last kid’s mouth. It was almost guaranteed that the sentence would be at the very least inverted, if not completely and utterly unrecognizable. And it always ended with the child who started the game saying, “That’s not what I said!”
In 2016, we don’t play telephone. We subtweet.
Six months after Twitter attempted to trademark the word subtweet, the digital security company McAfee published a blog post titled “Does Your Teen Recognize the Cruelty of Subtweeting?” that described subtweeting as “one of the most stealth and (dare we say?) ‘socially acceptable’ ways to cyber bully.” That same week, the Washington Post published a story positing that people who subtweet are terrible. And let’s not forget that the NCAA decided that coaches can’t subtweet after it was discovered that some were using it “as a way to mention or endorse recruits without violating NCAA rules.”
If this doesn’t mean anything to you, here’s a quick primer in subtweeting.
On the most basic level, a tweet is either your own (“Donald Trump is a Cheeto!”) or a response to someone else’s tweet (“@realDonaldTrump you’re a Cheeto!”). When you tweet a response, you’re @-ing someone, and they get a notification or will see it in their Twitter feed. If you don’t include their name but you’re responding to something they’ve said (“Donald Trump is a Cheeto!”), you’re removing the connection between their tweet and yours—and “subtweeting” them. As a result, subtweeting almost always has a negative connotation. It’s the virtual equivalent of talking behind someone’s back.
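The distinction above is mechanical enough to sketch in code. Here’s a minimal, illustrative Python heuristic—the function name and labels are mine, not anything Twitter provides—that separates a direct @-mention, which notifies its target, from a tweet that names someone without linking to them:

```python
import re

def classify_tweet(text: str, target_handle: str) -> str:
    """Roughly classify a tweet relative to a target user.

    Illustrative heuristic only: an @-mention creates a notification
    for the target; naming them without the @ does not, which is the
    hallmark of a subtweet. (Real subtweets often omit the name
    entirely, which no simple text check can catch.)
    """
    handle = target_handle.lstrip("@").lower()
    # Collect all @-mentions in the tweet, case-insensitively.
    mentions = {m.lower() for m in re.findall(r"@(\w+)", text)}
    if handle in mentions:
        return "reply/mention"        # target is notified
    if handle in text.lower():
        return "named, not mentioned"  # visible but no notification
    return "possible subtweet"         # no direct link to the target
```

Running it on the two examples from the primer, `classify_tweet("@realDonaldTrump you're a Cheeto!", "realDonaldTrump")` comes back as a reply/mention, while the @-less version registers as a possible subtweet—precisely the gap the primer describes.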
Over time, however, the definition of subtweeting has broadened to encompass a whole swath of offline behavior, addressing actions others have taken offline and online without directly naming the person responsible. It’s not just relegated to the normal teens-and-tweens crowd, either. Celebrities dish it out, politicians throw shade, and sports teams don’t hold back. Subtweeting now encompasses an entire lexicon of inherent blame that lacks a pointed finger.
All that being said, the most complicated part of a subtweet is the motivation behind it. As a tweet is public, there’s always a chance that the subject of the smack talk will see what you’ve said about them or read deeply into something you’ve said about someone else and think it’s about themselves. You can’t control how that person reads the tweet, so that’s where the telephone game comes into play: Unless you outwardly direct what you’re saying at a specific person, there’s a very high risk that the tweet will be misinterpreted or elicit an explicit “that’s not what I said!” The more convoluted and passive aggressive your message is, the less clarity there is in interpreting the intended target.
And truthfully, sometimes an accidental subtweet can be just as bad—some of the most banal tweets can have the most frustrating consequences. Sure, you can read up on how to subtweet well, or how to avoid subtweeting, but just because you know who you are talking about (or who you’re not talking about, for that matter) doesn’t mean that everyone else does. And it certainly doesn’t mean that everyone knows how to read a subtweet well.
The good news is that not everyone gets as much engagement on their smack talk as LeBron James does, and therefore the likelihood of your tweet reaching its intended target isn’t terribly high:
It's ok to know you've made a mistake. Cause we all do at times. Just be ready to live with whatever that comes with it and be with.....— LeBron James (@KingJames) March 1, 2016
That being said, people have adapted and put feelers out for potential negative critique:
I added a subtweet detector to Tweetdeck. pic.twitter.com/K6ueoHapKi— Farhad Manjoo (@fmanjoo) April 14, 2015
Or, quite straightforwardly, just acknowledged the presence of the subtweet:
This was a subtweet.— Jacob Brogan (@Jacob_Brogan) February 22, 2016
Ultimately, Twitter is a place where everything is said and nothing is held back, and the more drama you can bring to yourself, the more followers and favorites you acquire. In this context, it’s easy to see how a scorned lover or angry co-worker would find some relief in expressing their deepest darkest frustration without a verbal confrontation. But it also creates a culture not dissimilar to the 24-hour news cycle and pushes people to parade the darkest side of themselves in order to garner approval from others. And it simultaneously creates a subset of people who are perpetually afraid that what they’re reading is about them.
It’s also worth mentioning that people who subtweet have a tendency to come off as pretty gross and petty. The Washington Post details a study suggesting that those who confront their problems head-on, in a tweet that references someone directly, enjoy much more favorability among their peers. In contrast, those who avoid direct confrontation are looked at unfavorably. Overall, though, it was clear that people didn’t always understand what was happening in the tweets they were reading, and some of the negativity stemmed from the implication that it’s rude to be purposefully obtuse.
Truthfully, the more you subtweet, the more you actually become part of that disgusting grind of human unhappiness and inability to face overarching issues. And the more you might have to work on your attitude:
when you want to subtweet but you're working on your attitude pic.twitter.com/5hNKYe6Caj— Ellen DeGeneres (@EllenReaction) July 10, 2016
There is good news, though. There’s an easy solution to all this strife: You could stop subtweeting. And who knows, maybe that’s a subtweet right there.
FCC Votes Unanimously to Support 5G Wireless
As planned, the Federal Communications Commission voted Thursday on a proposal to make new portions of the radio spectrum available for 5G wireless service. The commission approved the initiative unanimously in a moment of bipartisan agreement.
Though 5G will probably not be widely available to consumers until at least 2020, the FCC’s decision will spur progress by allowing access to 11 GHz of high-frequency spectrum. And ubiquitous 5G access could change how people access broadband overall in the United States. FCC Chairman Tom Wheeler said, “This is a big day for our nation.”
Verizon, AT&T, Sprint, and T-Mobile are all in various phases of developing 5G technology and, for better or worse, the FCC is banking on this competition to drive 5G expansion. Wheeler said in a statement, “We are setting flexible rules that will allow the market to best determine how the technology will evolve, without having to ask our permission.” What exactly will develop is hard to say—the cable internet industry certainly has its flaws—but at least telecoms won't be able to complain about overbearing regulations ... for now.
Other countries like Japan and South Korea are also planning to roll out 5G coverage in the next few years. It’s hard to assess what the true impact of the technology will be, though, with so much unbridled excitement floating around. FCC Commissioner Mignon Clyburn said in a statement, “Indeed, there is seemingly no limit on how what we refer to as 5G could impact our everyday existence.”
Drones and M&M’s Help Vaccinate Endangered Ferrets
The U.S. Fish and Wildlife Service just announced a plan to save wild ferrets in the Great Plains region using a brilliant combination of drones and M&M candy. Endangered since 1967 and twice thought to be extinct, the black-footed ferret is one of the rarest mammals in North America—just 300 are thought to live in the wild. According to the FWS, the “primary obstacle” to the species’ recovery is its susceptibility to sylvatic plague, a bacterial disease similar to the bubonic plague in humans.
That’s where the drones come in.
To protect the ferrets from the plague, they need to be vaccinated. But tracking down wild animals is tough, which is why the FWS has partnered with private contractors to develop a vaccination delivery system in which unmanned aerial systems (more commonly called drones) will fly above the ferrets’ territory in northeastern Montana and drop vaccine-coated M&M candies in the area.