Joshua Coon
Christian. Student. Photographer.
For years, YouTube has fostered a culture of openness and expression. For the most part, it has done an admirable job of carefully keeping the site and community in good order. In December 2007, YouTube allowed its creators to monetize their content through ads featured at the beginning of videos and banners on the site. [1] This led to an entirely new ecosystem of creators who depended solely on YouTube ad revenue as their livelihood. Recently, though, in late March and early April, YouTube significantly hindered its creators by releasing new community policies that took a turn for the worse. In writing, YouTube's policies added "'sexually suggestive,' 'sensational and shocking,' and 'profanity and rough language'" as qualifiers for excluding videos from generating ad revenue. [2]

In theory, this is obviously a good thing. Videos that are explicit or that feature illegal activities should not be allowed. But the lack of clarity in these new policies has led to discrimination among YouTube's community of creators. For example, creators who express conservative and right-leaning political opinions have lost a significant amount of ad revenue even while their viewership has stayed the same. No matter where you stand politically, this issue is bigger than just an opinion. It deals with the very principle that free people have the right to speak freely, no matter how contrary their views may be. As another example, LGBT creators have seen a dip in their ad revenue solely because of how vocal they are about their sexuality. Again, wherever you stand on these issues and ideologies, the principle that all people have the right to free speech must be upheld and protected. As a public community at the heart of pop culture, YouTube has a moral obligation to stop and proactively prevent this kind of discrimination.
To solve this problem, YouTube should either revert to its old, tried-and-true policies, or correct and implement more specific and explicit policies that are devoid of any sort of discrimination.

Last week, I blogged about the data from a Fitbit being used to convict a man of murder after he allegedly shot his wife in the back. In summary, the man, Richard Dabate, shot his wife twice from behind and then fabricated a break-in story in which he claimed that "a 'tall, obese man' with a deep voice like actor Vin Diesel's and wearing 'camouflage and a mask'" [1] assaulted him and then killed his wife. Along with other evidence, the data from Mrs. Dabate's Fitbit was enough to convict.
Relating to Fitbit, an article published today (May 5th, 2017) reports that "Apple topples Fitbit as the world's top wearable company." [2] This article piqued my interest because of Apple's history of not disclosing the data stored within its devices. For instance, Apple refused to meet the FBI's demands to unlock the iPhones of the terrorists who killed 14 people in San Bernardino, CA. [3] After the terrorist attack, the Federal Bureau of Investigation began searching for ways to unlock the killers' phones. A federal judge ordered Apple to unlock the phones, but Apple refused. Ultimately, the FBI managed to crack the iPhone's security measures within months.

Back to the more recent Fitbit case, consider the difference in outcomes. Although its devices are not as heavily encrypted, we didn't hear any resistance from Fitbit regarding law enforcement extracting user data from one of them. This raises the question of what role companies must play in criminal cases involving their devices. Honestly, in this blog, it's hard for me to come to a conclusion as to which approach companies should take: the Fitbit approach or the Apple approach. Apple took a stance based on the strong principles of a utilitarian approach, whereas Fitbit took a justice approach. But the contrast between the two cases doesn't mean one is better than the other. Maybe each case was addressed appropriately given its differences in ethical approach. Whether justice or utilitarian, consequential or non-consequential, the necessity of discussing these matters is evident. More and more, our society is facing bigger and bigger problems regarding ethics in tech. As critical thinkers and contributors to society, we must keep a close watch on the ethical developments that continue to pop up around us.
4/28/2017

[13] Fitbit used to solve murder

On December 23rd, 2015, two days before Christmas, in the small and quaint town of Ellington, Connecticut, Connie Dabate was murdered in her own home. She was shot twice, once in the back and once in the head. The murderer, reported to be "a 'tall, obese man' with a deep voice like actor Vin Diesel's and wearing 'camouflage and a mask,'" [1] could not be located by police. Why? Because this large, mask-wearing man did not exist. Connie, a mother of two, was in fact murdered by her husband, Richard Dabate. Mr. Dabate claimed that he watched this tall and obese man shoot his wife after he had fought with and then been tied up by the alleged perpetrator. After the purported home invader had left the scene, police arrived to find Mr. Dabate "with an arm and leg bound by zip ties to a chair in the kitchen at the crime scene." The weapon that he said the large man used to kill his wife was Mr. Dabate's own .357 handgun, which he had acquired two months prior. Mr. Dabate showed "superficial knife wounds" and described a struggle between him and the man, although, according to investigators, the "physical evidence showed no sign of the struggle described by Mr Dabate." Lastly, a highly trained police dog indicated no scent of an intruder. To top it all off, Mr. Dabate "was in a relationship with another woman, who was expecting his baby." Arguably, Richard Dabate is a perfect example of a horrible human being. Although incredibly tragic and disheartening, all of these thrilling details are not the reason this case is making national news. Rather, it is the fact that the most substantial piece of evidence was none of the above. Instead, it was Mrs. Dabate's Fitbit that broke the case.
Once the Fitbit’s data was collected and analyzed, it showed “that she had walked 1,217 feet around the house during the time her husband said they were being attacked.” [2] Additionally, “the Fitbit showed her last living movement was at 10:05 a.m.,” which was over an hour after Mr. Dabate claimed the ‘intruder’ shot and killed his wife. As of today (April 28th), Richard Dabate is on trial on charges of “murder, tampering with evidence, and providing false statements,” and with the evidence at hand, he will most assuredly be found guilty. Without the data recovered from the Fitbit, the case might have grown cold. Although Mr. Dabate’s alibi was faulty at minimum, with his money and resources there would have been a high likelihood of him walking free. The collection of the personal and sensitive data stored on a device such as a Fitbit in a trial like this sheds some light on how we might use such data in future crimes. Our data can point to the most sensitive details of our lives, yet in this instance, it was likely the decisive evidence for justice.

4/22/2017

[12] Google’s War on Bad Ads

Google, the largest advertising company on earth, is reported to soon start blocking ads within its own browser, Chrome. Specifically, ads “that don’t comply with the Coalition for Better Ads list of standards.” [1] This standard aims to prevent invasive ads from becoming a mainstay by defining which ads are “least preferred,” “based on comprehensive research involving more than 25,000 consumers.” [2] Google’s motivation for implementing new standards in online advertising, and for blocking the ads that don’t meet its criteria, is to boost engagement with properly sanctioned ads. Currently, 26% of desktop users have some sort of software to hide advertisements. [1] Google wants to get that number down to zero, and rightfully so.
The content that we love to read and watch wouldn’t be on the internet if it weren’t for the money generated by ads. Additionally, Google itself wouldn’t be what it is today without its ad revenue (which makes up the majority of its income). Although these strides in internet advertising could be beneficial, some might argue that Google is abusing the power it has. “The online analytics firm StatCounter claims that Chrome has gobbled up a little over 52 percent of the worldwide browser market share.” [1] The company's decision to block ads that don’t qualify would significantly affect those who run unapproved ads. Many people’s incomes and livelihoods could be at stake. Also in regard to Google abusing its power: Google competes with most other ad networks. Since it has control over more than half of the world’s internet traffic, Google could be one step closer to monopolizing internet advertising. In hopes that Google doesn’t abuse its global reach, maybe the company will abide by its own motto: “don’t be evil.”

In 2015, 35,092 people died in the United States in car accidents, which is more than double the number of gun deaths (13,286 in 2015). This staggering number of car deaths is a public safety epidemic. I apologize for beginning this blog post with a downer, but these statistics set the stage for a much brighter future in vehicle safety. In recent years, the emergence of self-driving cars has offered a glimmer of hope at solving this epidemic. At the onset, self-driving cars could only be tested on closed courses due to their unpredictable and unnerving nature. Over time, with an increasing amount of data to analyze, computer vision and object tracking algorithms have greatly improved. Some self-driving cars have now graduated to real-world testing and even production cars (e.g. Tesla).
The most exciting part about production self-driving cars is not that you could eventually take naps on your way to work, but that your likelihood of getting into an accident on the way will be significantly decreased. Although the technology is not perfect and large strides still need to be made, the potential is staggering. In this real-world video, customers in Russia documented their Tesla’s ability to see past the car in front using its radar and to begin its collision avoidance before any other car could. Notice that the car’s collision alarm goes off before the accident even occurs.
In a more recent development of this technology, a start-up company called Luminar has demonstrated its Light Detection and Ranging (LiDAR) system, which it built from the ground up. Luminar’s version of LiDAR is a big deal because it can detect objects in stunning detail from 100 meters away and beyond. For instance, it can detect “a bicyclist weaving in and out of the road, at 100 meters and further away; a small pigeon that suddenly scurried about 40 meters in front of their car; and they clearly showed the human form of the mannequins, even those dressed in dark garb; as well as the black-painted canvas at the end of the pier.” In contrast, “data streaming in from other LiDAR makers would only show a few dots or a meager line indicating that an object lay ahead. Other systems could not specify objects or even detect dark walls and the mannequins dressed in darker clothes.” As monumental improvements to self-driving cars continue to unfold, Americans must start to consider the (near) future of vehicular transportation. Will it soon be our ethical responsibility to adopt the safest alternative to people-driven cars? How many more years must we go with a death toll in the tens of thousands? Can self-driving cars put an end to vehicle-related deaths?

4/8/2017

[10] Negating Net Neutrality

Net neutrality should be important to all of us, but what is it and why? Net neutrality is “the principle that individuals should be free to access all content and applications equally, regardless of the source, without Internet service providers discriminating against specific online services or websites.” [1] It is essentially legislative or legal action that prevents Internet Service Providers (ISPs), like Comcast and AT&T, from throttling internet connections based on how they are being used. For instance, without net neutrality, ISPs would be able to slow down and even charge online services like Netflix for their customers’ use of bandwidth.
It’s somewhat of a catch-22: customers pay an ISP for fast access to the internet, yet depending on what they access, they can be penalized for it. Internet Service Providers argue that this arrangement would allow them to provide faster speeds and a better experience to customers whose usage isn’t bandwidth-heavy, such as regular web browsing rather than video streaming. In the most recent net neutrality development, Federal Communications Commission Chairman Ajit Pai is moving to transition net neutrality from the FCC’s supervision and guidance to the Federal Trade Commission’s oversight. The controversy here stems from how each commission governs. The FCC has the authority to write the rules; it can write rules that determine how ISPs do business, while the FTC only has the authority to open lawsuits against unruly corporations. If this transition comes to pass, the only thing stopping ISPs from trashing net neutrality is their individual terms of service. With a common good approach to this issue, the right thing to do is clear. The FCC should not surrender its authority, since it has the potential to do the most good. Given its limited resources and narrower legal authority, the FTC is unable to properly regulate an issue as significant as net neutrality. It is still unclear how this transition of power would occur. In hopes that the internet will only become a freer place for expression and business, the tech community eagerly awaits the decisions of the powers that be.

3/31/2017

[09] Verizon dabbles with big data

On March 30th, 2017, the Electronic Frontier Foundation, or EFF, posted an article divulging information on a Verizon Wireless initiative to collect personal information from all of its customers who use Android phones.
This post has since been drastically modified and revised after a swift response from Verizon attempted to minimize the damage to its public image. If you are not aware, the EFF “is the leading nonprofit organization defending civil liberties in the digital world.” [1] It investigates software and digital technologies around the globe to ensure that consumers may enjoy privacy, expressive freedom, and creativity. In the report, the EFF revealed that Verizon is testing an Android application for all Android phones activated on its network. Once ready for release, the software would be pushed to millions of devices via an over-the-air update. So, what is this application? Why is it causing such a stir? The app, called “AppFlash,” is an “app launcher and web search utility” that will allow Verizon to “be able to sell ads to you across the Internet based on things like which bank you use and whether you’ve downloaded a fertility app.” AppFlash would be able to collect pretty much all of the data on your phone: “cell number, device type, operating system and the apps or services that you use,” as well as “everything installed on your device, your location and the contact details of everyone in your phonebook.” [2] The reasoning for this massive overreach in data collection is advertising strictly within Verizon-owned companies. Verizon claims that it will “provide more relevant advertising within the AppFlash experiences and in other places.” “Other places” is not well described, which could be a Pandora’s box in regard to where customer information might spread across the internet. Additionally, concerns over AppFlash’s security have surfaced. The app could potentially leave millions of devices with security holes for hackers to exploit.
For example, with the app soon to be installed on millions of devices holding sensitive and personal data, hackers will most certainly be probing AppFlash for exploits to manipulate for nefarious purposes. In the end, Verizon’s intent to collect unprecedented amounts of private user data points to a much larger issue. Companies are becoming increasingly bold in invading their customers’ data. Most shockingly, the vast majority of Verizon users will never even know. This poses an ethical issue that must be discussed among lawmakers, the tech industry, and, most importantly, the general public. The ethical grey area of ‘big data’ is still a young issue and one that must be proactively ironed out. People must engage in the debate and have their voices heard. That is the only way for fair and beneficial progress to be made toward a safe, growing, and healthy society.

3/17/2017

Finding a Fraudster

A judge in the quiet state of Minnesota has recently made waves in the California tech community after requesting a large sum of data from Google in a fraud investigation. The judge issued a warrant subpoenaing Google for the names, email addresses, social security numbers, payment information, account data, and IP addresses of the entire town of Edina from December 1, 2016 to January 7, 2017. The fraud case causing this stir is over $28,500 that was thought to have been requested by a Spire Credit Union customer, but was in fact requested by an identity thief who is still at large. Investigators did a Google search of the victim’s name, which resulted in the discovery of a fake passport using the victim’s name. In order to find the perpetrator, Edina police resorted to requesting that Google divulge the search information of thousands of people in the city to uncover who might have been searching the victim’s name prior to the fraud.
Google has since denied the outlandish subpoena from the Hennepin County judge and has responded by saying, “we will always push back when we receive excessively broad requests for data about our users.” This case brings into question the ethics of such broad data collection and Google’s decision to fight the court order. If Google complied with the warrant, it would be handing over the sensitive information of roughly 50,000 of its users. By rejecting the order, Google could be obstructing justice (depending on who you ask). Since this is not a life-or-death matter (or a crime of much proportion in the grand scheme of things), I side with Google’s decision to withhold the information from investigators. In the greater interest of the company, its millions of users, and the precedent that would be set if this warrant were upheld, I side with Google’s choice and condemn the Hennepin County judge for his or her blatant violation of the Fourth Amendment to the United States Constitution. With or without a law degree, it’s hard to misinterpret: “[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” As this development between Hennepin County and Google continues to unfold, I am curious to see its result. In the end, I hope that the fraudster is caught and that the sensitive information of thousands of Edina residents is kept secure from overreach of any amount.
3/10/2017

Uber's Upheaval

According to Wikipedia, ‘grey hat’ hacking “refers to a computer hacker or computer security expert who may sometimes violate laws or typical ethical standards, but does not have the malicious intent typical of a black hat hacker.” Interestingly, Uber’s ‘Greyball’ program does just what Wikipedia describes: it violates laws and ethical standards, but not in so gross a way as black hat hackers would. In an article published by The New York Times, Uber has been caught actively deceiving law enforcement officials by displaying ‘ghost cars’ on devices used to expose Uber drivers in places where the service is outlawed. Although mostly used outside of the United States, ‘Greyball’ is suspected of being used in cities like Portland, OR, and Las Vegas, NV, where ridesharing services are banned. An Uber spokesperson said, “this program [Greyball] denies ride requests to users who are violating our terms of service — whether that’s people aiming to physically harm drivers, competitors looking to disrupt our operations, or opponents who collude with officials on secret ‘stings’ meant to entrap drivers.” As previously mentioned, beyond just denying ride requests to law enforcement officials, Uber’s ‘Greyball’ has reportedly been displaying ‘ghost cars’ to known code enforcement officials. This is blatant obstruction of justice and a complete disregard of any ethical standard. Beyond the unethical and illegal use of ‘Greyball,’ Uber has been under heat for reports of a misogynistic corporate environment, racial profiling of black riders and increased rates for women, and a video of Uber’s CEO losing his cool during a discussion with an employee. To add salt to the company’s wounds, its vice president of product and growth, Ed Baker, has just stepped down. All of this negative PR has caused a significant decrease in the use of Uber’s services.
‘Greyball’ is simply the icing on the cake (or the nail in the coffin) for the $50 billion startup. The whole development is a prime example of how unethical management and decision making never pay off. In the end, the company has reaped what it has sown.