Beyond True and False

“FAKE NEWS,” INTELLIGENT MACHINES, AND THE WILL TO DETERMINE

Cole Hardman
May 19, 2019

EVIDENCE

…this essay examines the ways Facebook, Twitter, and other social media companies have proposed combating “fake news” with artificial intelligence. In doing so, I aim to answer what I believe to be important questions: can artificial intelligence really solve the problem of “fake news,” particularly fake political news, and if it can, what does that imply about our government?

On April 10th, 2018, Mark Zuckerberg was called to testify before the Senate Judiciary Committee and the Senate Committee on Commerce, Science and Transportation. As part of his testimony, the Senators asked Zuckerberg to address the Cambridge Analytica privacy violation scandal and the steps his company, Facebook, Inc., had recently taken to combat the presence of “fake news” on their social media website. This testimony, and the brief public glimpse it gave of the operations underway at Facebook to combat misinformation, shed light on the emerging challenges our country faces at the intersection of will, freedom, power, truth, and artificial intelligence.

The media blitz around Zuckerberg’s testimony was considerable. On April 9th, when Zuckerberg’s written testimony was released, the Verge wrote about the impending oral testimony and accompanying hearing as if it were a pro wrestling match: “The real meat of Wednesday’s hearing will come from Zuckerberg answering lawmakers’ questions about privacy, election interference, and Facebook’s future. But this testimony will set the stage for that showdown” (Robertson n.p.). While hyperbolic, the anticipation was warranted. As of the time of this writing, about a year later, the live stream recording created on YouTube by the Washington Post has amassed nearly a million views, and a quick Google search of “zuckerberg senate testimony” retrieves millions of results. Sensing the turn of the wind when he opened the testimony, Senator Grassley stated, “we have 44 members between our two committees. That may not seem like a large group by Facebook standards, but it is significant here for a hearing in the United States Senate” (“Transcript” n.p.). In the livestream recording of the hearing, everyone laughs at Grassley’s opening address, which hints at the confrontational elephant in the room personified by the tide of photographers focused on Zuckerberg, who sits waiting calmly for the opportunity to deliver his testimony (“Mark Zuckerberg Testifies” n.p.).

In contrast to most political events in Washington, the public obviously cared about what was happening when Zuckerberg stepped into the Capitol Building, making the inversion of power between the social government and the social media company laughably explicit. Despite the fact that he was the one summoned to speak before those in control of the law, Zuckerberg came to Washington on the attack. This moment represented a meeting of the mythological East Coast and West Coast embodiments of America — the archaic and slow-moving arm of the law and the devilishly quick algorithmic machinations of the tech industry — two powers that seem more than most to shape the lives of everyday citizens, and the nation was watching. Like the testimony of the former head of the FBI, James Comey, whose unique involvement in the 2016 presidential election and the events that transpired thereafter also made him into a political celebrity, Zuckerberg’s testimony was surrounded by a media firestorm that transformed the day’s events into something akin to a political Super Bowl. But unlike Comey, whose testimony was focused on the operations of the law as written and the adherence to or disrespect of those laws, the testimony that Zuckerberg provided, particularly his testimony about “fake news,” was centered on the very process of determining those laws and the government that evolves from them.

Tellingly, however, Zuckerberg’s testimony focused primarily on user privacy and not “fake news” or artificial intelligence. A search of the Washington Post transcript of the testimony reveals that the word “privacy” appears 101 times during the hearing. The phrase “artificial intelligence” appears five times, “fake news” appears four times, and “machine learning” appears only once (“Transcript” n.p.). This disparity between the time spent talking about privacy and the time spent talking about fake news makes sense in a particularly informative way: by talking about privacy, Senators in the committees kept the ball in their court, the court of the law as written. In doing so, the Senators bought themselves the chance to posture as defenders of the people while at the same time avoiding the complications of difficult-to-understand technologies, such as machine learning, that pose a greater challenge to voters’ rights. In Senator Nelson’s opening address, which was widely redistributed as a highlight of the hearing, he stated, “I hope we can get to the bottom of this. And, if Facebook and other online companies will not or cannot fix the privacy invasions, then we are going to have to — we, the Congress” (“Transcript” n.p.). Similarly, The New York Times chose to highlight the words of Senator Durbin: “I think that may be what this is all about…your right to privacy” (Wichter n.p.). Both of these statements highlight the evasive orientation of Zuckerberg’s testimony around privacy, and for this reason I argue that they are not interesting or worth considering further. Questions about privacy, independent of opinions on whether or not the laws they refer to should be altered, are similar to the questions posed to Comey during his testimony. They revolve around the law as written and can be answered by any honest lawmaker, lawyer, judge, and jury. Rather than the frequent mentions of “privacy,” then, I argue that “the meat” of the testimony, to borrow the Verge’s phrasing, is contained in those few instances in which “artificial intelligence,” “fake news,” and “machine learning” are discussed. These instances, though sparse, mark the moments when Zuckerberg’s testimony happens upon the questions that have yet to be answered, which demand to be answered by an informed public body through the process of self-government. Thus, in order to inform this public body, this essay examines the ways Facebook, Twitter, and other social media companies have proposed combating “fake news” with artificial intelligence. In doing so, I aim to answer what I believe to be important questions: can artificial intelligence really solve the problem of “fake news,” particularly fake political news, and if it can, what does that imply about our government?

My answer is centered on a contradiction that Zuckerberg does not acknowledge in his oral testimony. When Senator Grassley gives the floor to Zuckerberg, the 34-year-old CEO of the world’s most dominant social media company looks defiant, like someone who has come to reveal the truth in the presence of ignorance. “We face a number of important issues around privacy, safety, and democracy, and you will rightfully have some hard questions for me to answer,” Zuckerberg says, imposingly scanning the room for anyone who might rise to challenge the answers he plans to give (“Mark Zuckerberg Testifies” n.p.). It is clear that he still feels strongly about his company’s potential to enact positive change. He becomes slightly emotional, pausing to control his enunciation and to take a deep breath, when he talks about how Facebook was used to the benefit of March for Our Lives, the #MeToo movement, and relief efforts in the aftermath of Hurricane Harvey. But then his visage and tone change. In a way that suggests an end to the controversy at hand, Zuckerberg acknowledges his own responsibility and makes a sort of pledge:

…it’s clear now that we didn’t do enough to prevent these tools from being used for harm as well. And that goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy. We didn’t take a broad enough view of our responsibility, and that was a big mistake. And it was my mistake, and I’m sorry…It’s not enough to just connect people, we have to make sure those connections are positive. It’s not enough to just give people a voice, we need to make sure people aren’t using it to harm other people or to spread misinformation. (“Mark Zuckerberg Testifies” n.p.)

After this apology, Zuckerberg goes on to address the problems about which he has been called to testify, outlined under “CAMBRIDGE ANALYTICA” and “RUSSIAN ELECTION INTERFERENCE” in his written statement (Zuckerberg 2, 4). Once again, Zuckerberg assumes a pedagogical stance, re-establishing his authority over the Senate Committee as he attempts to instill public confidence in the steps his company has taken to combat future breaches of privacy and interference. He glosses over the Cambridge Analytica controversy and Facebook’s solutions, and then something interesting happens. He fails to acknowledge the section in his written testimony on Russian election interference and “fake news,” instead telling the Senators, “you can find more details on the steps we’re taking in my written statement” (“Mark Zuckerberg Testifies” n.p.).

The contradiction I want to focus on as the answer to the question I have posed — can artificial intelligence solve the problem of “fake news,” and if it can, what does that imply about our government? — can be found in this omitted section. Zuckerberg writes, “since 2016, we have improved our techniques to prevent nation states from interfering in foreign elections, and we’ve built advanced AI tools to remove fake accounts more generally” (Zuckerberg 5). This is the only mention of artificial intelligence in Zuckerberg’s written testimony, and its importance, doubly understated by its omission from his oral testimony, feels suspect. A thousand questions and more erupt from that one sentence: What does it mean for one nation state to interfere with another’s election via social media? How do these advanced AI tools work? And what is the process of removing fake accounts? But none of these questions on their own provide a valid reason for this section’s absence in the oral testimony that Zuckerberg provides. Even if there are no solid answers to these questions, a sort of murky public understanding of each makes the statement legible and worth presenting. This general understanding is evident in the proliferation of articles on these topics and the common linguistic currency that they share. Instead, I suggest that this particular section was not presented because it contradicts the claims that Zuckerberg makes in his pledge at the beginning of his testimony. “It’s not enough to just connect people,” he says, “we have to make sure those connections are positive.” But when artificial intelligence is introduced as an agential political force capable of determining the validity of our news, the idea that “it is not enough to just connect people” takes on a completely different meaning from the one that Zuckerberg presents on the surface. It becomes evident that Facebook’s business involves more than simply connecting a society of intelligent people in a “positive” way; Facebook is also in the business of connecting our society of intelligent people to a society of intelligent machines.

There are two possibilities present in the claim that artificial intelligence can combat fake political news, and they contradict and undo each other. First, if it is established that the world is populated by identifiable, essential objects, then the fake/true news dichotomy holds. News can be examined in an objective way that allows an intelligent machine to determine its truth or untruth, and it is possible for intelligent machines to be effectively used in the process of identifying and removing “fake news” from media sources like Facebook and Twitter. When the news in question is inherently political, however, there is a problem with this process. Political news, per the nature of its essential reality in this understanding of the world, is news involved in determining the social government of intelligent human beings. Furthermore, in an essential world like the one assumed here, the process of determining the self (as in self-government) is only possible for the object that is determining its own identity. That is to say, a rock cannot determine what a raindrop is, and so on. Other objects might affect the possibilities of determination, but only the object in question can make the final determination. Therefore, while an intelligent machine in an essential and objective world might be able to identify if non-political news was fake or true, it would have no ability to determine what political news was fake or true. That ability would belong solely to those whose politics were being determined. If the reality of this world I am imagining is flipped, however, the opposite is true. Assuming a world that is inessential and subjective, it is possible that intelligent machines can take part in the determination of a society that includes them and other processes that influence its appearance as a social government, because that determination involves a process of entanglement. The problem is that there is no way of determining whether or not something is true in an inessential, subjective world, where truth itself is fluid and contextual. And so, if society decides that intelligent machines are capable of determining whether political news is “fake news” or not, then it undermines the category of “fake news” altogether. On the other hand, if society decides that news exists as an essential, objectively measurable object that can be fit into the categories of fake or true, then it also decides that it is impossible for intelligent machines to participate in our collective self-determination as a government. This is the central contradiction that I want to explore.

In the following sections of this paper, I suggest that an understanding of the relationship between will and being as expressed in the concept of self-determination is essential to understanding the contradiction above. To establish this idea and to expand on the contradiction I have just identified, I look at the history of “fake news” as a category, briefly examine how intelligent machines operate, and untangle the implications of the claim that artificial intelligence can be used to control the proliferation of “fake news” on media websites. In order to do so, I relate the contradiction of essential “fake news” and capably subjective intelligent machines to a rift in the philosophy of the will that occurs between the writings of G. W. F. Hegel and Friedrich Nietzsche.

In Elements of the Philosophy of Right, Hegel situates his arguments in an essential reality that can be accessed and objectively judged, similar to the version of reality where news can be objectively fake or true that I imagined, which leads to a morality of individualism and a form of social government that enables and supports the moral right of individual identity. Nietzsche argues against this very sense of individualism in Beyond Good and Evil, writing, “What gives me the right to speak of an ‘I,’ and even of an ‘I’ as cause, and finally of an ‘I’ as cause of thought?” (§16). Going further, in On the Genealogy of Morals, Nietzsche expands on this critique of the essential, individual “I” by developing a conception of society based on the processes of power as an extension of the will that eventually undermines the commanding individual, leading to the modern world. Given the subjectivity of that modern world, it appears that intelligent machines could indeed help with a process of self-determination that was based on dominance and not essential individualism. Thus, while Hegel conceives of a world where “fake news” can be identified, Nietzsche imagines one where it cannot, but where intelligent machines can exercise their will to determine the political reality as freely as we can. By examining these two diametrically opposed philosophers in relation to the problem of “fake news,” I ultimately propose that the contradiction inherent in the statements made by the social media companies responsible for maintaining integral media spaces is a rhetorical tool used to evade the resentment they develop when they become more powerful than the government of the people who use their websites. Finally, I look beyond these dichotomies toward a future when intelligent machines can indeed be used to refine the process of society’s self-determination as a social government, but with an understanding of these machines’ limitations and the ways in which they function within structures of power.

“FAKE NEWS”

Whether dissenters who damage national morale or teachers of the arts of life, people speaking with the utmost freedom clearly stand both to benefit and to harm the society that grants them this freedom.

While “fake news” might feel like a creation of the interconnected, computerized world of the 21st century, and while the complications brought about by the introduction of intelligent machines into our daily lives might indeed be a contemporary quandary, the problem of “fake news” has been a part of American politics since its inception. Samuel Adams, the cousin of John Adams, first rose to infamy for his particular brand of misrepresentative journalism. Writing to Alexander Hamilton about his decision to step down as President, George Washington cited “a disinclination to be longer buffitted in the public prints by a set of infamous scribblers” (“To Alexander” n.p.). And, although John Adams joined Samuel in “cooking up paragraphs, articles, occurrences, &c.” in the era before the Revolutionary War, he took the side of suppressing such scribblings around the time he was estimated to have written a note about the proliferation of lies in the news in the margins of Condorcet’s treatise (Mansky n.p.). The Alien and Sedition Acts of 1798, which Adams worked with Congress to pass, “made it a crime for American citizens to ‘print, utter, or publish…any false, scandalous, and malicious writing’ about the Government” (“Alien and Sedition Acts” n.p.). Similar laws, also commonly known as the Sedition Act, were signed into law by Woodrow Wilson in 1918 as an extension of the Espionage Act in what was advertised as an attempt to maintain American morale during World War I. When, perhaps for the first time, a member of the Supreme Court established the right of free speech in a legal proceeding by dissenting from an application of the Sedition Act of 1918, that dissenting judge, Justice Oliver Wendell Holmes Jr., laid the groundwork for the marketplace of ideas. “[W]hen men have realized that time has upset many fighting faiths,” he wrote, “they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out” (Cohen n.p.). Coincidentally, James Madison, a Democratic-Republican in direct opposition to John Adams who likely felt a great amount of pressure under the Alien and Sedition Acts of 1798, anonymously wrote something similar about the literati, or the vocal public influencers working in print, in the National Gazette: “They are the cultivators of the human mind — the manufacturers of useful knowledge — the agents of the commerce of ideas — the censors of public manners — the teachers of the arts of life and the means of happiness” (Mansky n.p.). Whether dissenters who damage national morale or teachers of the arts of life, people speaking with the utmost freedom clearly stand both to benefit and to harm the society that grants them this freedom.

This problem, the problem of free speech, is perhaps what “fake news” embodies most. Hegel, when constructing his ideal, individualistic society in Elements of the Philosophy of Right, provides a solution. In Hegel’s society, the self-determined individual appears to come first. By working through a dialectic process in which the individual will oscillates between its limitless potential and its contextual restraints, the person comes to embody their identity. A person’s identity is further established by their market relationships with other individuals, both of whom exchange goods for explicitly different reasons that serve to highlight their individuality while also augmenting it with the value of the goods that are either traded or acquired. Then, on the other hand, come marriage and love, which work in the opposite way from the individual market and transform the individual members of the family into a foundational unity of society with a shared sense of will. These two forces — the individuating force of the market and the uniting force of love — lead to interactions that establish the existence of estates. These estates act as groups of people with a shared will that is similar to the shared familial will, though not as binding, and the government legislates based on the estates and the shared interests of the people who form them. Importantly, this legislation is an act of the will. In this way, the estates might be thought of as a form of mediated deliberation, through which people can work towards an agreeable compromise between their individuality and their shared sense of will that can then be considered by the legislative government. Through this mediated process, the legislation that is enacted connects the individual via the estates to the government, completing a mirror of the dialectic that an individual undertakes when determining their identity, a mirror that acts as the foundation of that identity. Thus, the individuality of a given person is dependent on the society they inhabit, and the individual might be said to come last despite appearances of preeminence. Yet neither the government nor the individual would exist if estates did not play their important mediating role.

Hegel’s solution to the problem of free speech, represented by “fake news,” is directly related to the deliberative mediating power of the estates. He writes:

If the Estates hold their assemblies in public, they afford a great spectacle of outstanding educational value to citizens, and it is from this above all that the people can learn the true nature of their interests. As a rule, it is accepted that everyone already knows what is good for the state, and that the assembly of Estates merely discusses this knowledge. But in fact, the opposite is the case, for it is only in such assemblies that those virtues, abilities, and skills are developed which must serve as models. (§315)

For Hegel, it is the estates, and not the individual subject, that should and does speak freely. That is not to say that individuals have no right to free speech, but that they fundamentally have no understanding of what they want to say until the estates develop “those virtues, abilities, and skills” that “must serve as models” for their interactions. In this sense, “fake news” should be corrected by the deliberative qualities of the mediating estates in Hegel’s ideal society as that society wills itself into being.

Some contemporary scholars studying the “fake news” problem have arrived at similar solutions, even going as far as suggesting the necessity of a mediator. In their study titled “Social Media and Fake News in the 2016 Election,” Hunt Allcott and Matthew Gentzkow frame the “theoretical and empirical background” of the “fake news” debate by placing it in an economic context. The authors “sketch a model of media markets in which firms gather and sell signals of a true state of the world to consumers who benefit from inferring that state” and “conceptualize fake news as distorted signals uncorrelated with the truth” (Allcott 212). In Hegelian terms, this could be expressed as a market developed by an assembly of estates, considering here the necessity of two or more news outlets, where the media estates collect and sell “signals of a true state of the world” to a public that will “benefit from inferring that state” (Allcott 212). The world that Allcott and Gentzkow conceive of is also essential and objectively measurable such that the news can be determined to be objectively true or fake, but the authors suggest that fake news is developed in a novel way. According to Allcott and Gentzkow, market values determine that there cannot be perfectly true news because it is cheaper for media estates to be precise and costly for consumers to determine accuracy. Furthermore, there is no market desire for the news to be true, because consumers tend to prefer news that validates their predetermined opinions. “Fake news arises in equilibrium because it is cheaper to provide precise signals,” they write, “because consumers cannot costlessly infer accuracy, and because consumers may enjoy partisan news” (Allcott 212). In this conception of the world, “fake news” is an indispensable fact of the market. By making these assumptions in their argument, the authors stake out a way to represent the mediation of the individual via the estate through examinations of certain data. But what they truly gain, and what they do not acknowledge, is a deconstructive understanding of mediation. In their argument, they develop a process of mediation that must by necessity undermine the representations mediated. They implicitly state as much when they dangle a misleading modifier at the end of the precise definition of “fake news” they provide later in their paper: “We define ‘fake news’ to be news articles that are intentionally and verifiably false, and could mislead readers” (Allcott 213).

The deconstructive effect of mediation was probably what made Facebook do away with the term “fake news.” In an August 2017 Slate article detailing the slipperiness of the term, titled “Facebook has Stopped Saying ‘Fake News’,” Will Oremus documents the social media company’s attempt to distance itself from the “fake news” designator in favor of the substitute, “false news.” Oremus, after contacting Facebook, received this statement from a spokesperson over email:

The term ‘fake news’ has taken on a life of its own. False news communicates more clearly what we’re describing: information that is designed to be confused with legitimate news, and is intentionally false. (Oremus n.p.)

Facebook, it seems, was having as much trouble defining “fake news” as Allcott and Gentzkow. In a way, the trouble is understandable. The usage of “fake news” changed rapidly throughout the 2016 election and shortly thereafter. It moved quickly from a moniker for satirical news sources like The Onion, to a precise designation for news propagated by Russian agents acting as part of a misinformation effort meant to undermine Hillary Clinton’s campaign, and finally to a derogatory term used by Trump and his supporters to denote information that they simply do not agree with. With all of these changes of definition happening seemingly at once and in a particularly unstable way, it is easy to understand why Facebook might want to sidestep the “fake news” designation in attempting to keep their site secure. But what if the problem was not that the definition of “fake news” was so abused and distorted that it came to be useless, but that the nature of “fake news” itself contained the slide that unworked it? A Politico article on the history of “fake news” states the obvious: “Fake news is not a new phenomenon. It has been around since news became a concept 500 years ago with the invention of print — a lot longer, in fact, than the verified, ‘objective’ news, which emerged in force a little more than a century ago” (Soll n.p.). A Guardian article titled “What is fake news? How to spot it and what you can do to stop it” describes the “truthiness” of “fake news” stories, along with their clickability, as main attractions for consumers, echoing the economic model outlined by Allcott and Gentzkow (Hunt n.p.). The Guardian also briefly hints at another tantalizing aspect of fake news: its capacity to lead to real action, as when Edgar Welch infamously opened fire at the Comet Ping Pong pizzeria in Washington, D.C. at the height of the “Pizzagate” conspiracy theory. Even more disturbing, a New York Times article, “How a Fake Group on Facebook Created Real Protests,” outlines the activities of a group page that Facebook took down as part of its purge of fake accounts. In her write-up of the group, Sheera Frenkel shows how fake social media accounts like those created by the Russia-backed Internet Research Agency were able to use true facts, organize real people, and amplify signals of what Allcott and Gentzkow call the “true state of the world” in a way that allowed them to manipulate the individual wills of the populace. In Hegelian terms, activities like these could be likened to the assembly of an estate that intends to manipulate the government, not mediate political facts critical to it. The group page in question, called Black Elevation, “promoted events and coordinated activities in several cities…messaged activists and asked them to spread the word…posted videos and photographs that encouraged people to show up at protest rallies…and even advertised a job opening” (Frenkel n.p.). At one point, the group organized a rally to mark the anniversary of Michael Brown’s death, and a YouTube video of the event put online by the Memphis chapter of Black Lives Matter verified that several people attended. Finally, even more disturbing than this, a quick examination of the right-wing response to climate change research in the years leading up to the 2016 election and thereafter clearly shows that politically-charged news does not have to walk hand in hand with the truth in order to be true in and of itself.
In this way, when a right-wing politician denies climate change, they at once walk away from the truth of our physical reality and reaffirm the political truth that right-wing politicians do not believe in climate change. As recently as November 21, 2018, Donald Trump dismissed climate change in a morning tweet: “Brutal and Extended Cold Blast could shatter ALL RECORDS — Whatever happened to Global Warming?” (Trump n.p.). In the face of these examples, dating back to when the news as we know it was first made possible by the invention of print, and possibly further, it seems that what is “fake” is the news itself. When the news is true, it is by definition telling a lie.

So what makes the lie acceptable and good (if not true) in comparison to “fake”? On the one hand, assuming the essential Hegelian reality I have been working in, Allcott and Gentzkow might say that the lie is acceptable when the data mediated by a news source correlates well enough with the true reality of the world that it can be considered acceptable. This version of an acceptable lie is what leads scientists to record the errors in precision they experience when making measurements with various instruments during their experiments, and what leads other scientists to accept these errors as necessary rather than as invalidating the experiment. However, as the New York Times article about the manipulative Black Elevation Facebook page and Trump’s recent tweets have shown, it is possible to invalidate anything and everything with a political orientation, and Nietzsche provides a way of understanding this world of alternative realities. When writing against the idea of atomism, and specifically against the individual, “monad” soul, Nietzsche ends section twelve of Beyond Good and Evil with an interesting prediction:

To be sure, when the new psychologist puts an end to the superstition which has hitherto flourished around the soul-idea with almost tropical luxuriance, he has as it were thrust himself out into a new wilderness and a new mistrust — it may be that the older psychologists had a merrier and more comfortable time of it — : ultimately, however, he sees that, by precisely that act, he has also condemned himself to inventing the new — and, who knows? perhaps finding it. (§12)

The association of discovery and invention in this passage is tantalizing, even enigmatic, at first. These terms seem impossible to reconcile, but their association with happiness, or the loss of it — “it may be that the older psychologists had a merrier and more comfortable time of it” — is telling. The merriment and comfort that Nietzsche alludes to here are likely not valuations of some past event. He devotes the entirety of section 211 of Beyond Good and Evil to an attack on the value systems upheld by philosophers “after the noble exemplar of Kant and Hegel” in favor of “actual philosophers” who act as “commanders and lawgivers” who “say ‘thus it shall be!’” (§211). Instead, I suggest that the merriment expressed here is similar to what later philosophers working in performance theory called felicity, or the fulfillment of a promise. From this perspective, the connection of invention and the discovery of the new is obvious: in a performative sense, to find something is to invent it and vice versa. And taking note of what might have been a particularly informative Freudian slip by the translator, R. J. Hollingdale, I cannot resist entertaining the idea that the performance associated most with both inventing and finding the new in our contemporary lives is what we call the news.

Nietzsche takes the conflation of invention and discovery of the new further in On the Genealogy of Morals. In this later work, Nietzsche, as opposed to Hegel, spends more time critiquing the errors of his modern world than constructing his ideal version of society. Consequently, he writes a considerable amount about how the current circumstances of the world he inhabited came to be. In Nietzsche’s philosophy, the crux of the modern world rests on the distinction between the good/bad dichotomy and the good/evil dichotomy. Originally, he asserts, the people in the world could be divided into one of two classes: “the ‘good men’ themselves, that is, the noble, the powerful, those of high degree…” and the subjugated peoples who lead “the slave’s revolt in morality” that starts with “resentment” (Nietzsche, Genealogy of Morals §2, §10). At first, when the noble and the subjugated live in a world constructed out of aggression, the subjugated have no recourse and are forced to settle for a moral structure dependent on the good/bad dichotomy, which determines that what the nobles do is inherently good and what the subjugated do is inherently bad per the nature that placed each person at their station in society. But, “when resentment itself becomes creative and gives birth to values,” a new sort of “righteous” morality, based on “a new love” that “grew out of hatred,” develops, wherein the subjugated assume the role of the good for themselves and replace the concept of the bad with the conception of evil, which they ascribe to the nobles that lord over them (Nietzsche, Genealogy of Morals §10, §14, §8). In this way, according to Nietzsche, the nobles have been subjugated by the ignoble, who stripped the noble peoples of their ability to benefit from their inherent goodness and the powers that arrive with that lucky inheritance. Importantly for our discussion, one of those powers that the nobles previously wielded, which is significant to Nietzsche later on as he continues to undermine the modern moral system, is the power to name what is good and bad. Thus, in Nietzsche’s world, self-determination is an effect of the will that can only be achieved after victory is claimed in a battle of wills that establishes one person as lord and another as subject, granting that victorious person the noble right to will the new into being. Mirroring his earlier definition of “actual philosophers,” Nietzsche writes, “the master’s right of naming extends so far that it is permissible to look upon language itself as the expression of the power of the masters: they say ‘that is that, and that’; they affix a seal to every object and every event with a sound and thus, as it were, take ownership of it” (Nietzsche, Genealogy of Morals §2).

In the context of “fake news,” the “right of naming” has forceful implications. Instead of a world where “fake news” exists as a corruption of media sources that would otherwise responsibly provide revelations of the world that were as true as they could manage, the entire process of producing and distributing the news appears as an extension of the nobles who “affix a seal to every object and every event with a sound and thus, as it were, take ownership of it.” Nietzsche’s philosophy leaves no room for truth, especially not a political truth organized around logical values and a reasonable understanding of the natural world, and in the place of truth, he gives us an understanding of society wherein the powerful compete for the right to name things as good. In Nietzsche’s conception of the world, these powerful nobles have been undone, but I suggest that it is the resentful undoing of their individual powers that leads them to organize people into estates that act as apparatuses allowing them to fulfill their wills. Such an understanding of history finds a middle ground through the combination of Hegel and Nietzsche’s philosophies. Still, the concept of a media estate that does not help determine a self-governed society but rather subjects the individuals in its reach to a society that has been determined by an individual or a group of individuals in control of that estate does not resolve the contradiction posed at the start of this paper. Instead, a media estate enacting a will that shapes social reality, compared to a media estate that makes it possible for reality to be considered in the process of deliberation that shapes a social will, is an estate that operates in a subjective world and only cares about the truth that it actively constructs. Intelligent machines would be able to help construct truths in this conception of reality, but they would not be able to designate these constructions, not even their own, as true or “fake.” There would be no objective truth for them to measure and no way of establishing a signal of reality as true or “fake.” Instead, all news would be understood as inherently “fake,” and there would be no need for a designator like “fake news.” There would not even be a way to correlate a necessarily incorrect mediation of reality as in the work of Allcott and Gentzkow, which is only possible if some sort of underlying and true version of reality that can be accessed and measured is assumed. Instead, in a Nietzschean world like the one I have explored in this paper, even in one with estates, the truth of a society’s political reality would be dependent on the signals that society accepts as noble and worth consideration when it is determined.

Taking the Hegelian and Nietzschean views together is one way of looking at the problem of “fake news” that social media companies like Facebook and Twitter claim they can use intelligent machines to solve. In the next section, I look at the specific instances of people claiming that intelligent machines can be used to solve the “fake news” problem, provide an overview of how these intelligent machines work, and show that intelligent machines are not capable of performing the tasks they are being asked to do. In doing so, I show how the assumptions made by people who make claims about the potential of intelligent machines to solve the “fake news” problem fall to either the Hegelian or Nietzschean side of the contradiction I have outlined. I hope that, by working through an examination of the potential for intelligent machines to solve the “fake news” problem in this way, I can move toward an understanding of precisely what social media companies expect their intelligent machines to do, even if it is understood that they cannot possibly determine whether or not political news is true or fake.

“FAKE” INTELLIGENT MACHINES

Dhruv Ghulati, the founder of Factmata, a startup that works with Twitter, sums it up perfectly: “The risk is that you try to get the perfect definition of fake news and never reach an answer. The important thing is to build something.”

In a string of tweets posted on March 1, 2018, Twitter CEO Jack Dorsey committed “to help increase the collective health, openness, and civility of public conversation” on his website (Dorsey n.p.). Dorsey tweeted, “Recently we were asked a simple question: could we measure the “health” of conversation on Twitter?” and went on to discuss a “holistic” approach to solving the problem of undesirable media (Dorsey n.p.). Dorsey wrote that “if you want to improve something, you have to be able to measure it” and compared Twitter to the human body: “The human body has a number of indicators of overall health, some very simple, like internal temperature. We know how to measure it, and we know some methods to bring it back in balance” (Dorsey n.p.). Comparing the social body to a physical human body is a move that dates back to Plato’s Republic, and the comparison works as a metaphor to help explain the steps Twitter is taking to solve its media problems in a way that seems objective. Dorsey himself admits that, in the past, the steps Twitter has taken to remove content have been met with accusations of “apathy, censorship, political bias, and optimizing for our business and share price instead of the concerns of society” that he clearly wants to avoid (Dorsey n.p.). Notably, framed in the Hegelian sense of reality I have been exploring, the strategy Dorsey proposes for avoiding these accusations rests on the ability of intelligent machines to measure the health of Twitter and to correct it in an accordingly objective way. Similarly, when Zuckerberg answered Senator John Thune’s questions about artificial intelligence (A.I.) during his hearing, he provided an example of when this holistic approach might work. “Today, as we sit here,” Zuckerberg stated, “99 percent of the ISIS and Al Qaida content that we take down on Facebook, our A.I. systems flag before any human sees it. So that’s a success in terms of rolling out A.I. tools that can proactively police and enforce safety across the community” (“Transcript” n.p.).

But in that same explanation of artificial intelligence’s uses, Zuckerberg hits a snag:

Some problems lend themselves more easily to A.I. solutions than others. So hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced, right? It’s — you need to understand, you know, what is a slur and what — whether something is hateful not just in English, but the majority of people on Facebook use it in languages that are different across the world. (“Transcript” n.p.)

The concept of hate speech that Zuckerberg attempts to tackle stretches, in the framework of this paper, the Hegelian assumptions that make intelligent machines successful. Zuckerberg associates this stretch with the subtleties of language, implying that further technological progress will eventually be able to overcome these problems and learn to objectively determine whether speech qualifies as hate speech in the same way it currently qualifies certain media as terrorist propaganda. Yet, the trouble with Zuckerberg’s “linguistic nuance” reasoning is that it assumes a world where the meaning of language is static, even if it is difficult to understand, when the reality is more Nietzschean than Hegelian, and the meaning of hate speech, like the rest of language, is constantly in flux as it is continually willed into existence by the apparatuses of power that have the privilege to name what is good and what is not. The success Facebook has had in removing terrorist propaganda, far from suggesting the possibility of scrubbing all forms of unwanted media from the website, is actually undermined by Facebook’s inability to solve the problem of hate speech in the framework of the Nietzschean reality I have staked out. Dorsey’s proposition for a holistic approach to combating unwanted media is comparably undone after a bit of digging. In his string of tweets, Dorsey left a link to a company named Cortico, which he claimed had been tapped to help Twitter solve its media problems. That link led to a blog post Cortico published on their website on the same day that Dorsey wrote his string of tweets. In the blog post, claiming that their conclusions are derived from “studies done by our colleagues at the MIT Media Lab’s Laboratory for Social Machines on propagation of rumors and political tribalism during the 2016 U.S. presidential election,” Cortico writes, “as a starting point, we are developing a set of health indicators for the U.S. (with the potential to expand to other nations) aligned with four principles of a healthy public sphere” (“Measuring The Health” n.p.). The second of these indicators, which are briefly listed, is given as follows: “Shared Reality: Are we using the same facts?” (“Measuring The Health” n.p.). I could not have written a more useful rhetorical question myself. Not only does this question represent the contradiction I have identified — if we were using facts, how could it be possible to use different ones? — but it simultaneously exemplifies the operational abilities of intelligent machines that social media companies have claimed can solve the “fake news” problem. Understanding how these machines work will help in developing an understanding of the slide from Hegelian success in containing unwanted media to a Nietzschean struggle over the ability to name the good.

Figure 1: a linear decision boundary created by an SVM

In a simplistic sense, intelligent machines are software tools that create decision boundaries in data. Some examples, taken from the Microsoft guide to choosing which machine learning algorithm to use in their Azure Machine Learning Studio software suite, are provided in Figure 1 and Figure 2. Figure 1 shows the results of a linear Support Vector Machine (SVM), which works by finding the straight line that separates two classes in a set of data with the widest possible margin between them (“How to Choose” n.p.). In Figure 2, the decision boundary has been created by a non-linear neural network. Neural networks are typically what most people think of when they think of intelligent machines because of their association with robots from science fiction and their allusion to biological intelligence. However, Figure 2 shows that they are powerful because of their flexibility and not because of the false association with human biological brain functions that their name implies. A neural network can construct a complex, non-linear boundary between the classes of data, meaning that more complicated arrangements of classes can be distinguished, especially compared to something like the linear SVM in Figure 1. However, while the approaches to machine learning shown in Figure 1 and Figure 2 might seem incomparably different, they are both constructed using similar techniques. That is to say that both SVMs and neural networks are trained. There are three basic types of training used in machine learning, according to the Microsoft Guide: supervised, unsupervised, and reinforcement learning. In supervised training, a set of training data is pre-labeled according to what class it is said to represent, and the intelligent machine forms a decision boundary by associating the features of the training data with the classes they have been labeled as representing.

Figure 2: a non-linear decision boundary created by a neural network

The authors of the Microsoft Guide sum it up as such: “Supervised learning algorithms make predictions based on a set of examples. For instance, historical stock prices can be used to hazard guesses at future prices” (“How to Choose” n.p.). Unsupervised learning, conversely, relies on the machine to determine the different classes. According to the Microsoft guide, “the goal of an unsupervised learning algorithm is to organize the data in some way or to describe its structure,” a process that “can mean grouping it into clusters or finding different ways of looking at complex data so that it appears simpler or more organized” (“How to Choose” n.p.). Functioning in the middle ground between supervised and unsupervised learning, in reinforcement learning, the intelligent machine makes its own decision about a data point and “receives a reward signal a short time later, indicating how good the decision was” (“How to Choose” n.p.). As the intelligent machine works through data in the reinforcement learning process, it maximizes the reward signal and refines its decision. Importantly, it is not the machine itself that determines the reward signal, labels the data in the supervised learning process, or, more subtly, presents the data in the specific way that allows the intelligent machine to distinguish it during the unsupervised learning process. Instead, it is the person who wants to use the intelligent machine for a specific purpose that determines the machine’s success in fulfilling that purpose through the process of training.
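To make the trainer’s role concrete, here is a minimal sketch of supervised training in Python, assuming the scikit-learn library is available; the coordinates, labels, and model settings are invented for illustration and do not come from the Microsoft guide. Both machines learn decision boundaries, but only because a person first wrote down the labels.

    # A minimal sketch of supervised training, assuming scikit-learn is installed.
    # The toy coordinates and labels are hypothetical: a person, not the machine,
    # decides which points count as class 0 and which as class 1.
    from sklearn.svm import LinearSVC
    from sklearn.neural_network import MLPClassifier

    X_train = [[0.0, 0.1], [0.2, 0.3], [0.1, 0.0],  # points we label class 0
               [0.9, 1.0], [1.0, 0.8], [0.8, 0.9]]  # points we label class 1
    y_train = [0, 0, 0, 1, 1, 1]

    # A linear SVM finds a straight decision boundary (cf. Figure 1)...
    svm = LinearSVC().fit(X_train, y_train)

    # ...while a small neural network can form a non-linear one (cf. Figure 2).
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X_train, y_train)

    # Both machines' later "decisions" are downstream of the labels chosen above.
    print(svm.predict([[0.5, 0.5]]), net.predict([[0.5, 0.5]]))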

An analogy might make things clearer. Imagine a coin pusher arcade machine. When you put a coin into the top of the machine, it bounces on an array of pins that determine how it falls into the pushing tray at the bottom, where a moving wall either will or will not push money off the edge of the tray and into the prize compartment where you can grab it. Now, imagine that you can organize the pins in a way that allows you to determine where the coin you put into a specific slot will fall. Machine learning is the scheme of organization that you use, and an intelligent machine is one that you have modified such that the bounces of the pins will put the coin where you intend it to go. While the machine itself seems objective, and while the data it uses also seems objective, the processes an intelligent machine enacts on data are not objective. Instead, when the training of an intelligent machine is considered, it becomes apparent that what these machines really do is enact the subjective will of the people that create them.
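A second sketch, under the same assumptions as above (scikit-learn, invented data), makes the coin-pusher analogy literal: the machinery and the data are identical, but two different labelings — two different arrangements of the pins — send the same coin to different slots.

    # The coin-pusher analogy in code: identical data, two different labelings,
    # two differently "arranged" machines. All values here are hypothetical.
    from sklearn.svm import LinearSVC

    X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]

    # One trainer's will: split the points left/right.
    left_right = LinearSVC().fit(X, [0, 0, 1, 1])

    # Another trainer's will: split the same points bottom/top.
    bottom_top = LinearSVC().fit(X, [0, 1, 0, 1])

    # The same coin dropped into each machine falls into a different slot.
    coin = [[0.9, 0.1]]
    print(left_right.predict(coin))  # grouped with the right-hand points: 1
    print(bottom_top.predict(coin))  # grouped with the bottom points: 0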

Extending this analogy to the use of intelligent machines in solving the “fake news” problem leads to a similar conclusion. An article titled “This Fake News Detection Algorithm Outperforms Humans,” published on The Next Web, details an “algorithm, which was developed by researchers from the University of Michigan and the University of Amsterdam,” that “uses natural language processing (NLP) to search for specific patterns or linguistic cues that indicate a particular article is fake news” (Greene n.p.). This algorithm, according to the article, outperforms people in the detection of “fake news” despite the limited training data the researchers had access to. Yet, in his description of the algorithm, author Tristan Greene writes, “first things being first, the researchers had to decide what fake news is,” undermining his own assertion (Greene n.p.). The algorithm cannot possibly be outperforming people in determining what “fake news” is, because someone had to first decide what “fake news” was in order to train the machine and develop the decision boundary that allows it to make a determination. Similarly, a few months after Zuckerberg made his statements about intelligent machines in the Senate hearing, Facebook’s chief A.I. scientist, Yann LeCun, was reported by Bloomberg to have walked back earlier confident statements about the role of intelligent machines in solving the company’s media issues, saying that “A.I. is part of the answer, but only part,” and hinting at the necessary inclusion of people in the process of solving the “fake news” problem.
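A final sketch shows where Greene’s “first things being first” decision enters the pipeline. This is not the Michigan and Amsterdam researchers’ system; it is a generic, hypothetical text classifier, assuming scikit-learn, with invented headlines and hand-assigned labels. Everything the machine later “detects” is downstream of the line where a person records which articles count as fake.

    # A generic, hypothetical "fake news" detector sketch (not the algorithm from
    # the article), assuming scikit-learn. The labels are the human decision.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    articles = [
        "Senate passes budget resolution after lengthy debate",
        "Local council approves new funding for road repairs",
        "SHOCKING: miracle cure doctors don't want you to know about",
        "You won't believe what this celebrity said about aliens",
    ]
    labels = ["real", "real", "fake", "fake"]  # decided by a person, not the model

    # TF-IDF features stand in for the "linguistic cues" the classifier learns.
    detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
    detector.fit(articles, labels)

    # Every later prediction re-enacts the labeling decision made above.
    print(detector.predict(["Miracle cure shocks doctors everywhere"]))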

To bring things back into the Hegel and Nietzsche framework I have placed this paper in, I suggest that the combination of these references to how intelligent machines can be used to detect “fake news” and the understanding of intelligent machines that I have outlined here show that the problem that arises when we mix intelligent machines with our political machines, which I originally intended to investigate as a singular problem arising from specific events like the 2016 election, is not actually a problem, but the common process of using intelligent machines. First, intelligent machines are built on the assumptions of a reality that resembles Hegel’s historical, positive, and essential worldview. The data the machines process is considered to be essential and measurable in such a way that it can be appropriately categorized, and the machines themselves are considered to be objective in such a way that they can measure the data without influencing it. The categorical results are therefore assumed to be objective, essential determinations of the data that would be the same regardless of the process used to determine that data, as long as the process was equally objective and essential. But then, when the intelligent machines are trained, the data is selectively presented to the machines in a way that subjectively controls how the machines will make their determinations. Consequently, even if it is true that the data is essential and the machine is objective, the ultimate use of an intelligent machine, as when it is used to decide whether or not the news is “fake,” is subjective. It turns out that while an intelligent machine and the data it operates on might be objective and essential, the process of using that machine is as subjective as the use of any other tool. Dhruv Ghulati, the founder of Factmata, a startup that works with Twitter, sums it up perfectly: “The risk is that you try to get the perfect definition of fake news and never reach an answer. The important thing is to build something” (Strickland n.p.).

So, what are social media companies building? Framing this exploration of the intersection between “fake news” and intelligent machines in the philosophies of Hegel and Nietzsche helps to answer this important question. Furthermore, my examination of the intersection of “fake news” and intelligent machines, while enlightened by the philosophies of Hegel and Nietzsche, also suggests a way in which their theories of the will can be reconciled. In the following section, I stake out these conclusions by showing how social media companies really use intelligent machines, and what this usage implies about the will and the process of determining social structures.

“FAKE NEWS,” REAL RESENTMENT

Now, say the leaders of the social media estate, these “fake news” determinations are objective, and they could be determined in no other way. But after exploring how intelligent machines work, I think that I have sufficiently shown how mythical their objectivity is. Instead, I suggest that we are entering a new age of deference, and as long as we blindly accept the processes of intelligent machines without questioning the objectivity of their operations, we have given up our ability to hold people in power responsible for their willful determinations.

Earlier in this paper, I suggested that the Nietzschean “resentment” of noble individuals led to the formation of Hegelian estates. In this last section, I want to outline more specifically how “resentment” is related to will and being as expressed in the concept of self-determination, how it leads to an evolution of new social orders, and what these complications of Hegel and Nietzsche’s philosophies mean for the use of intelligent machines, especially their use as solutions to the “fake news” problem. I conclude by putting a new sort of social order forward for consideration, wherein social media companies like Facebook can use intelligent machines to fulfill their wills such that they can still be held accountable for the responsibilities entangled with the power to name that they exercise.

When Senator Grassley made his joke about how much attention the Zuckerberg hearing was receiving, he made the resentment created by the power differences between social media and social government clear. In that moment, the hearing became a struggle of will between the social government and social media estates. Only, in Nietzschean terms, because we live in a world of good and evil where the resentful peoples are in power, that struggle of wills was not a battle. Instead, the struggle of wills taking place at the Senate hearing, like all struggles of will in this age of resentment, was enacted as the deferral of resentment situated in the process of the Hegelian dialectic movement of an individual, self-determining will. In this understanding, the universal will of the individual, where all things are possible, is the existence of an individual in a noble world without limitation. Conversely, the contextual and limited will of an individual is contextualized and limited by the resentment their universal, noble will engenders. The dialectic movement of an individual in a world of resentment is thus not a universal recognition of context and a contextual recognition of the universal, as in Hegel, but a deferral of resentment by deference, a giving up of one’s determination to the context of a society, that ironically allows a person to escape their subjectivity to that society and enact their will. To be a subject, it seems, is to be resented, and as a result, individuals formed estates to avoid resentment. By living in deference to the resentment of a group of people that agree to see an individual will enacted, individuals in power, like Zuckerberg or Dorsey, defer the resentment they would otherwise accumulate to the estate they are in charge of, which sublimates that resentment and turns it into a satisfying product, similar to the signals of the world theorized by Allcott and Gentzkow. This satisfying product produced by an estate is nothing less than the will of the person in charge of that estate presented in a way that is acceptable outside of the estate, and “it is only in such assemblies that those virtues, abilities, and skills are developed,” as Hegel writes. A better, more sublime production of satisfaction serves to better placate the public, and the most sublimely satisfying products are those that appear to be objective despite their necessarily subjective existence as applications of another’s will. This is why companies like Facebook and Twitter are turning to intelligent machines.

Intelligent machines are the exemplary producers of sublimely contradictory, seemingly objective satisfaction in today’s software-dependent age. This can be seen in the references to intelligent machine operations and the understanding of intelligent machines I outlined in the previous section of this paper. Turning back to Grassley’s joke, it is important to note that his stab at Facebook had ulterior meanings for Republicans at the hearing, who not only resented Facebook for its political power, but also believed that it and other social media companies were actively censoring posts aligned with their right-wing politics. Senator Ted Cruz raised exactly this point during the hearing: “Mr. Zuckerberg, I will say there are a great many Americans who I think are deeply concerned that that Facebook and other tech companies are engaged in a pervasive pattern of bias and political censorship” (“Transcript” n.p.). In a Hegelian framework, this assertion of censorship, however questionable, amounts to an accusation by the social government estate against the social media estate, and this accusation shows how estates themselves can fail in their task to satisfy the public and can build resentment as if they were individuals. Acts like censorship, when acknowledged outright in an accusation such as the one made by Senator Cruz, reveal the estate-run world of good and evil for what it is: a world of good and bad that Nietzsche incorrectly says we have evolved from and left behind. As such, these accusations have to be addressed in order for the people at the heads of estates to retain their power. When Zuckerberg talks about intelligent machines at his hearing, he is doing just that. The myth of objectivity that surrounds intelligent machines allows him to use them as a rhetorical tool that both supports his claim to combat “fake news” and defers the public’s resentment onto intelligent machines, re-establishing the satisfaction that Republican accusations had previously succeeded in souring. To his credit, Zuckerberg does admit that “a lot of our A.I. systems make decisions in ways that people don’t really understand,” but he does not take his chance to briefly explain to the public how those systems work, instead telling the Senators that he will get them that information after the hearing (“Transcript” n.p.).
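To make the opacity Zuckerberg gestures at concrete, consider a minimal sketch in Python with scikit-learn of what “decisions people don’t really understand” look like from the inside. Everything here is invented for illustration: the features, the labels, and the model are hypothetical, not Facebook’s system. The point is only that a trained model’s “reasons” are arrays of floating-point weights, not rules anyone could read out to a Senate committee.

```python
# A minimal sketch (not Facebook's system) of why trained models resist
# plain-language explanation. All features and labels are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical, standardized article features, e.g. punctuation density,
# source age, share velocity; labels (1 = flagged) are synthetic.
X = rng.normal(size=(200, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# The model's entire "reasoning" is these arrays of floats. Nothing in them
# names a criterion a Senator could audit.
for i, w in enumerate(clf.coefs_):
    print(f"layer {i} weights, shape {w.shape}:\n{w.round(2)}")
```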

Intelligent machines do much more than simply defer resentment, though. They also enact the wills of people in power more quickly and subtly than any other deferring apparatus that has been created to date, including the estate apparatuses of individuals united under people in power. Claire Wardle, who heads First Draft, a group that combats misinformation, claims that all social media companies are “using AI because they need to scale” (Strickland n.p.). By scale, Wardle means that the companies need to enact their will on more instances, occurring more quickly, than the people who work at those companies could possibly handle. Intelligent machines solve this problem because, like the pins that guide a coin to a specific slot in a coin machine, the decisions they make when enacting the will of the company happen instantaneously and stand ready around the clock, unlike employees who have to eat and sleep and inevitably spend time away from work. The image of the flexible neural network decision boundary from the last section (Figure 2) also shows how much more subtle intelligent machines can be than earlier pre-determined statements of determination, such as a law or declaration passed by the Senate. Intelligent machines operate like the statement of a king that divides land in a far-off community where the king himself cannot be present, except that an intelligent machine is capable of dividing that land in highly subtle ways that align more closely with the king’s will than a simple written statement ever could. Furthermore, intelligent machines are by their very nature capable of changing to ensure that a given will is enacted in many different contexts. This contextual robustness, which is distinctly different from previous forms of orders sent by people in power, makes the myth of their objectivity even more difficult to dissolve. For these reasons, it should be clear why people in power like Zuckerberg have accepted intelligent machines as their rhetorical tool par excellence for the deferral of resentment.
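The contrast between a written rule and a flexible decision boundary can be sketched in a few lines of Python. The snippet below, assuming scikit-learn and a synthetic two-moons dataset (neither model is anyone’s real “fake news” filter), pits a linear classifier, which draws one legible, statute-like straight line, against a small neural network whose boundary bends to fit the data, as in Figure 2.

```python
# A minimal sketch contrasting a fixed, legible rule (a linear model, akin to
# a written statute) with a neural network's flexible decision boundary.
# The data is synthetic; neither model is a real content-moderation system.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A linear rule: one straight line through the feature space.
statute = LogisticRegression().fit(X_tr, y_tr)

# A small neural network: a boundary that bends to fit the data's shape.
machine = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)

print(f"linear rule accuracy:    {statute.score(X_te, y_te):.2f}")
print(f"neural network accuracy: {machine.score(X_te, y_te):.2f}")
```

On curved data like this, the network’s bendable boundary tracks a pattern the straight line cannot, which is precisely the subtlety, and the seeming objectivity, at issue in the paragraph above.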

Thus, exploring “fake news” and intelligent machines in the frameworks of Hegel’s and Nietzsche’s philosophies produces a new understanding of the relationship between will, being, and self-determination that has implications for the future of using intelligent machines to determine whether the news is “fake” or not. I suggest that the social process of determining the self as government is exemplified in the Hegelian, dialectic oscillation between the contextualized, limited self and the universal self of unlimited possibility. The contextualized self is limited by an understanding of reality that imposes its exterior forces on the possibilities of action. An understanding of these exterior forces depends not on the essential reality of the forces themselves, although an objective measurement of them can factor into that understanding in some cases, but on how these forces are named by the “good” people in power who exert their wills over “bad” people who are not in power. However, because we live in a world of good and evil and not good and bad, in Nietzschean terms, the people who name the context that limits the contextualized self establish and maintain their power by deferring the resentment that power engenders when they exercise it. These people defer the resentment for power by establishing estates, agreeing to live in deference to the subjugated peoples who agree to enact their will, and it is only through these estates that an individual is capable of completing their dialectic trajectory from a contextualized to a universal individual and back again, determining their self as being. The estates established by people in power defer resentment by sublimating it, transforming it into a satisfying product that appears as objective as possible in an attempt to placate the public, who come to see the production of the estates as a natural process and not a subjective execution of an individual will. Finally, the assembly of estates, taken up to the most general estate, forms the basis of social government, which works to enact the wills of the estates and further reinforce the wills of the individual people in power who head them.

In recent times, the “fake news” problem has brought the process of sublimating resentment into sharp relief against the backdrop of the 2016 elections and Trump’s questionable victory. The resentment on display at the Zuckerberg hearings, together with the recent accusations of censorship made by prominent right-wing policy advocates, however misconstrued, made it clear to the public that social media estates, for better or worse, have a subjective impact on our ability to determine our social government through the media they do or do not let their users consume. To defer the resentment created by the realization of this impact, Zuckerberg and other social media leaders have taken a new step: deferring to intelligent machines. Intelligent machines have several advantages over old systems of deferring that must be taken into account. For one, their apparent context-invariance makes them seem more objective than the old systems, which have been revealed for what they are: systems of good and bad, not good and evil. Intelligent machines are also much more subtle than other apparatuses previously used to enact the will of powerful individuals while deferring resentment. They are much faster, too. In other words, the scale of their effects is generally much greater and much more acceptable to the public. With all of this in mind, the claim to use intelligent machines to solve the “fake news” problem has to be understood as a rhetorical move that defers the resentment social media companies inspired when they previously determined what media was “fake” and unworthy of consumption and what news was good and worthy, using methods that are no longer acceptable. Now, say the leaders of the social media estate, these “fake news” determinations are objective, and they could be determined in no other way. But after exploring how intelligent machines work, I think I have sufficiently shown how mythical their objectivity is. Instead, I suggest that we are entering a new age of deference, and as long as we blindly accept the processes of intelligent machines without questioning the objectivity of their operations, we give up our ability to hold people in power responsible for their willful determinations.

In order for the public to fairly and successfully determine its self-government, it has to be informed. Importantly for this day and age, it has to be informed about intelligent machines. The increasing speed and volume of media shared over the internet make it necessary to use intelligent machines to combat bad actors who wish to spread misinformation. I am not arguing for blocking the use of intelligent machines, and I believe that social media companies like Facebook and Twitter should use them wherever they will be beneficial. However, people in power who use intelligent machines to exercise their will also have the responsibility to be transparent about how these machines are trained and how they operate. That the public might not be educated enough to understand the intricacies of the technology that makes these machines possible is not an excuse for keeping the public in the dark. Especially when it comes to political news, the public has the right to participate in determining what is “fake” and what is true. If that means first participating in creating the types of intelligent machines that are used to make “fake” and true determinations, then society should work to structure itself in a way that makes such participation possible. I look to this future out of necessity. In Beyond Good and Evil, Nietzsche writes, “it seems to me more and more that the philosopher, being necessarily a man of tomorrow and the day after tomorrow, has always found himself and had to find himself in contradiction to his today: his enemy has always been the ideal of today” (§212). Trapped in the contradiction of this new today, the public has no recourse but resentment until a transparent orientation toward intelligent machines reveals itself, and the ability of a society to exercise its will to determine itself as a government remains in question.

Works Cited

“Alien and Sedition Acts (1798).” Our Documents, www.ourdocuments.gov/doc.php?flash=false&doc=16.

Allcott, Hunt, and Matthew Gentzkow. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives, vol. 31, no. 2, 2017, pp. 211–236.

Cohen, Andrew. “The Most Powerful Dissent in American History.” The Atlantic, Atlantic Media Company, 10 Aug. 2013, www.theatlantic.com/national/archive/2013/08/the-most-powerful-dissent-in-american-history/278503/.

Dorsey, Jack. “We’re Committing Twitter to Help Increase the Collective Health, Openness, and Civility of Public Conversation, and to Hold Ourselves Publicly Accountable towards Progress.” Twitter, Twitter, 1 Mar. 2018, twitter.com/jack/status/969234275420655616?lang=en.

Frenkel, Sheera. “How a Fake Group on Facebook Created Real Protests.” The New York Times, The New York Times, 14 Aug. 2018, www.nytimes.com/2018/08/14/technology/facebook-disinformation-black-elevation.html.

Greene, Tristan. “This Fake News Detection Algorithm Outperforms Humans.” The Next Web, The Next Web, 22 Aug. 2018, thenextweb.com/artificial-intelligence/2018/08/22/this-fake-news-detection-algorithm-outperforms-humans/.

Hegel, G. W. F. Elements of the Philosophy of Right. Edited by Allen W. Wood. Translated by H. B. Nisbet, Cambridge University Press, 2002.

“How to Choose Machine Learning Algorithms — Azure Machine Learning Studio.” Microsoft Docs, 17 Dec. 2017, docs.microsoft.com/en-us/azure/machine-learning/studio/algorithm-choice.

Hunt, Elle. “What Is Fake News? How to Spot It and What You Can Do to Stop It.” The Guardian, Guardian News and Media, 17 Dec. 2016, www.theguardian.com/media/2016/dec/18/what-is-fake-news-pizzagate.

Mansky, Jackie. “The Age-Old Problem of ‘Fake News.’” Smithsonian.com, Smithsonian Institution, 7 May 2018, www.smithsonianmag.com/history/age-old-problem-fake-news-180968945/.

“Mark Zuckerberg Testifies on Capitol Hill (Full Senate Hearing).” YouTube, Washington Post, 10 Apr. 2018, www.youtube.com/watch?v=6ValJMOpt7s.

“Measuring The Health of Our Public Conversations.” Cortico, 1 Mar. 2018, www.cortico.ai/blog/2018/2/29/public-sphere-health-indicators.

Nietzsche, Friedrich Wilhelm. Beyond Good and Evil. Translated by R. J. Hollingdale, Penguin Books, 2003.

Nietzsche, Friedrich Wilhelm. On the Genealogy of Morals. Translated by Michael A. Scarpitti, Penguin Books, 2013.

Oremus, Will. “Facebook Has Stopped Saying ‘Fake News.’” Slate Magazine, Slate, 8 Aug. 2017, slate.com/technology/2017/08/facebook-has-stopped-saying-fake-news-is-false-news-any-better.html.

Robertson, Adi. “Mark Zuckerberg’s Wednesday Congressional Hearing Testimony Is Now Online.” The Verge, The Verge, 9 Apr. 2018, www.theverge.com/2018/4/9/17215904/mark-zuckerberg-facebook-congress-hearing-house-of-representatives-testimony-transcript.

Soll, Jacob. “The Long and Brutal History of Fake News.” Politico, Politico, 18 Dec. 2016, www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535.

Strickland, Eliza. “AI-Human Partnerships Tackle ‘Fake News.’” IEEE Spectrum: Technology, Engineering, and Science News, IEEE Spectrum, 29 Aug. 2018, spectrum.ieee.org/computing/software/aihuman-partnerships-tackle-fake-news.

“Transcript of Mark Zuckerberg’s Senate Hearing.” The Washington Post, WP Company, 10 Apr. 2018, www.washingtonpost.com/news/the-switch/wp/2018/04/10/transcript-of-mark-zuckerbergs-senate-hearing/?utm_term=.5e143de00395.

Trump, Donald J. “Brutal and Extended Cold Blast Could Shatter ALL RECORDS — Whatever Happened to Global Warming?” Twitter, Twitter, 22 Nov. 2018, twitter.com/realdonaldtrump/status/1065400254151954432?lang=en.

“To Alexander Hamilton from George Washington, 26 June 1796.” Founders Online, National Archives, 13 June 2018, founders.archives.gov/documents/Hamilton/01-20-02-0151.

Wichter, Zach. “2 Days, 10 Hours, 600 Questions: What Happened When Mark Zuckerberg Went to Washington.” The New York Times, The New York Times, 12 Apr. 2018, www.nytimes.com/2018/04/12/technology/mark-zuckerberg-testimony.html.

Zuckerberg, Mark. “Hearing Before the United States Senate Committee On The Judiciary and the United States Committee On Commerce, Science And Transportation.” United States Senate, United States Senate, 10 Apr. 2018, www.judiciary.senate.gov/imo/media/doc/04-10-18%20Zuckerberg%20Testimony.pdf.

