C2C Digital Magazine (Spring / Summer 2021)

Behind every algorithm, there is a human coder…or is there?

By Desiree L. DePriest, Purdue University Global




In the 21st century, globalization and the intercultural relations that come with it are the norm, regardless of whether individuals choose to participate. A person may hold any number of views on protecting themselves against unwanted tracking, or lean toward protectionism, but the reality remains: some of the technology they consume is guaranteed not to be made in the United States, and a process or set of rules calculated to guide online operations is guaranteed to be steering their activities across the internet. These mathematical instructions, frequently written by programmers, are called algorithms.

Human Tracking


As an effect of being inundated by hidden instructions from external forces, more of us are being dumbed down than cognitively enhanced. Those who busy their thoughts with conspiracies ranging from chips in vaccines to "the mark of the beast" are missing the point: that chip has sailed. Arguably, even before sensors and chips, there were Social Security numbers assigned at birth (through a hospital, midwife, or other registrar) that follow each individual in perpetuity. Today, algorithms gather everything from keystrokes to voice and facial recognition, through sensors on payment cards, driver's licenses, license plates, and streetlights, aggregating data about each person without requiring consent. You may not be on social media yourself, but at least one person you know is, meaning the data invariably cycles around to include you. Finally, nearly every search performed on the internet carries a query string after the "?" in the URL; those parameters, combined with the IP address sent with every request, tell the service what you searched for and roughly where you were searching from. Even automobiles, with GPS and Bluetooth, are in the game. Simply put, whether you identify as Hansel or Gretel, "cookie crumbs" follow you day in and day out.
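To make the mechanics concrete, here is a minimal Python sketch of what rides along in a search URL's query string. The URL and parameter names (q, client, utm_source) are hypothetical stand-ins, not taken from any particular provider; the point is only that everything after the "?" arrives at the server as readable key/value pairs, alongside the requester's IP address.

```python
# A minimal sketch: parsing the query string a search URL carries.
# The URL and parameter names below are illustrative assumptions.
from urllib.parse import urlparse, parse_qs

url = ("https://www.example-search.com/results"
       "?q=arthritis+medicine&client=mobile&utm_source=newsletter")

parsed = urlparse(url)
params = parse_qs(parsed.query)  # everything after the "?"

# The server receives these pairs with every request, together with
# the requester's IP address and browser headers.
for key, values in params.items():
    print(f"{key} = {values[0]}")
# q = arthritis medicine
# client = mobile
# utm_source = newsletter
```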

To clarify the dynamics of the situation humans find themselves in, it is logical to seek out the deeper causes of how global tracking came to be. A fitting analogy can be gleaned from the 1976 film "A Star Is Born," in which Kris Kristofferson sings the lyric, "Admission's free but you pay to get out." Initially, the internet was people-driven; we shared blogs, garden photos, and emails, with no direct profit motive. The corporations arriving on the internet offered free social networks that a person simply signed up for. User agreements were double-speak legalese that most of us did not read yet agreed to, thinking no harm, no foul, since it was "free." These agreements allowed the corporations to get paid by third-party advertisers and, more importantly, to slice and dice all user information into sellable demographics.

You are a Known Quantity


For example, a company selling arthritis medicine is probably not interested in people 18 to 25 years old but can benefit from targeting Boomers. You can still be targeted through the social media of your 18- to 26-year-old children if you follow their pages, which most of us do for parental reasons. Regardless, an IP location tightly associated with a characterization of the person behind it (age, gender, political affiliation, and more) saves the time and expense of marketing country by country, freeing funds to be reinvested in blowing up your email and every site you visit to persuade you to buy arthritis medication. Many people are not consciously aware of the advertisements that seem to coincidentally follow their internet activities after they visit a site carrying the same product. Nevertheless, this is what happens, visually or through auditory reminders, and it sticks in the mind more firmly than we think. A phone on vibrate can 'Pavlov' a person completely out of a face-to-face conversation to reach for the device instead.

These corporations are well beyond the guessing games of the early 1990s. Their teams consist of psychologists, sociologists, visual, linguistic, and auditory specialists, and of course the algorithm development team. Seeking dopamine-driven feedback by engaging as many natural senses as possible is the baseline endeavor for the social scientists, and it would be hard to believe they are unaware they are exploiting a vulnerability in human psychology (Lanier, 2018, p. 8). Forming algorithmic linkages to more and more similar or redundant sites, on the other hand, falls to the coders. There is no longer profitability in universal truths, Socratic discussion, or the liberty of information presented as a spectrum of choices. The algorithms are designed to make the choices for you by compartmentalizing reality into mostly binary, digital forms that machine language can build upon. Reaching through and beyond advertisements, this information is locked and loaded for corporate decision-making in banking, real estate, race, politics, and health. It is aggregated from the DIY security systems on your home and even the chip in your pet. Over time, any information outside the algorithms built around the user's activities creates a form of cognitive dissonance: ideas inconsistent with the psychological bubble elicit a negative response or a conspiratorial lens. It would be more efficacious to frame this article around the alternative hypothesis: "There is no aspect of human life on social media that is not affected by algorithms."

The definition of socialization, per se, does not include inanimate objects such as machines. Yet at the root of what is happening in a perpetually technology-connected world is an augmented social process. Advances in machine learning, such as neural networks and various artificial intelligences, are intensifying the associations. These are code that programmers use to keep the formulae behind the initial algorithms self-generating, "learning" the user's habits. This is not an efficiency meant to improve thought, add convenience, or spread accurate information, but to speed up the reinforcing dopamine rush that increases consumption and addiction. Neural networks and AI do not think, and therefore cannot carry socialization any further in the definitional sense. Networks receive their data instructions from the algorithms, grab your mind, and run with it through pre-coded mathematical, logical patterns. If a user visited Walmart.com yesterday, Walmart will pop up on their computer today. A huge amount of deference is given to these networks, from political election polls to justifying pharmaceutical companies. These algorithms will tell a user, say a middle-class white woman, to vote for Hansel rather than for Gretel, the candidate who actually shares her best interests, perhaps as a result of her shopping for pillows on a conservative website. Nothing positive about Gretel will pop up in the woman's searches. Based on that limited information, the woman may vote against her own interest because she, along with her peer group, fits the promoted demographic. Others may vote against Gretel out of the aspiration to be considered middle-class, female, and white. Additionally, algorithms will disseminate articles about maladies that big pharma wants the FDA to approve to expand the distribution pool for drugs like Xanax; one malady recently introduced is "social media dysmorphia syndrome." Instead of deleting their social media, the core problem indicated in the very name of the syndrome, users will go to their physicians and ask for the pill. They may even create an online blog group so more people can attach themselves to having the syndrome, on social media.
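The Walmart example above is, at bottom, a simple rule. The toy Python sketch below illustrates the feedback loop; the site names and the history structure are invented for illustration, not drawn from any real ad platform.

```python
# A toy sketch of retargeting: yesterday's visits drive today's ad.
# Site names and the history structure are hypothetical.
from collections import Counter

browsing_history = ["walmart.com", "news-site.com", "walmart.com",
                    "pillow-shop.com", "walmart.com"]

def pick_ad(history: list[str]) -> str:
    """Serve an ad for whichever site the user visited most often."""
    visits = Counter(history)
    top_site, _ = visits.most_common(1)[0]
    return f"Ad for {top_site}"

print(pick_ad(browsing_history))  # Ad for walmart.com

# Each click on the served ad is appended to the history, so the same
# site surfaces again tomorrow: the loop reinforces itself by design.
```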

Commerce Over Conscience


Figure 1.  What Future? 






While the United States continues to place commerce over conscience, intercultural concerns in the European Union (EU) recognize the societal and economic transformations, risks, and challenges of leaving these corporations and nation-states unregulated. Corporations roaming the globe, implementing opaque predictive and persuasive algorithms upon citizens and shaping how people communicate, connect, and consume, is not okay. This is especially concerning in cross-border, national, and domestic security matters (Eur-Lex, 2000). In discerning a solution, the problem is not strictly the machines but the various business plans with perverse incentives, which inflate the importance of digital technologies in the lives of the citizenry while omitting core principles and the protection of the fundamental right to anonymity in the online environment. The dispiriting side effect of policy-tweaking is that each cycle of debate, placing fault upon platforms and/or bad actors, motivates more people to demand that the companies develop more and more algorithms. The companies are pressed to build algorithms to govern speech and rid the internet of maliciously falsified news, bullying, racism, identity deception, and other nasty things (Lanier, 2018).

These various inaccuracies (and revenue-generating possibilities) are not lost on tech manufacturers or other corporations, some of which give good face time to these issues while moving profit plans forward. IBM, Microsoft, and others have taken steps to recognize, understand, and remove bias in their AI machines (Lohr, 2018). Concurrently, companies like Alphabet (Google, YouTube) and Facebook (Instagram, Messenger) depend, to some extent, on these biases continuing. Nation-states like China and Russia are in the algorithmic business of concentrating power. Commitments to ethical AI algorithms are ambitious in a world where the biases of humanity, aggravated by the very demographic segmentation the algorithms are designed to reveal, seem impenetrable. Generally, the process is as follows, sketched in code below: 1) a business plan is set; 2) the coder is given a set of criteria; 3) the code is developed and fed into a self-organizing AI network; 4) the network produces the segmentation; which results in 5) browser searches limited to the criteria fit to that user. The repetitive persuasion of this process conditions, over time, the information, misinformation, and disinformation shaping the user's perceptions of reality. No doubt there could be some existential benefit in branding the global population according to corporate or national criteria for consumption. Yet wealthy individuals spending millions of dollars on a ticket to space before the ship has even been designed, users upgrading their mobile devices to 5G before the technology actually functions in distributed spectrums, or people believing there is a child-pornography ring under a pizza shop are a few of many questionable behaviors that make one ponder who benefits.
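As a hedged illustration of steps 3 through 5, the sketch below uses k-means clustering as one simple stand-in for the "self-organizing AI network"; the user features, segment labels, and content mapping are all invented for the example.

```python
# A sketch of steps 3-5: a clustering model produces the segmentation,
# and content is then filtered per segment. All data are invented.
import numpy as np
from sklearn.cluster import KMeans

# Step 3: user features the coder was told to collect
# (age, minutes online per day, purchases last month)
users = np.array([
    [22, 340, 1],
    [24, 310, 2],
    [61,  95, 6],
    [58, 120, 5],
    [35, 200, 3],
])

# Step 4: the model produces the segmentation on its own
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)

# Step 5: each segment sees only the content mapped to it
# (the label-to-content mapping is assigned by hand for this sketch)
content_by_segment = {0: "sneaker ads", 1: "arthritis-medicine ads"}
for user, seg in zip(users, segments):
    print(f"user {user.tolist()} -> segment {seg}: {content_by_segment[seg]}")
```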

Black Box Syndrome


Importantly, with all the congressional sessions addressing tech companies, the line cannot be drawn at regulation alone. There are maladies in the technology itself. "Black box" syndrome, a general term for the problems that arise in using complex mathematical and statistical algorithms, is among the manipulators (Rudin & Radin, 2019). In science and engineering, experts know the inputs and outputs of these machines, but not the inner workings. The bounty of the confrontation is accepted and the computation is trusted, without any knowledge of how many independent thinkers were muted, or how many backpropagations and error adjustments were performed to fulfill the business plan.
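A small sketch makes the point, assuming scikit-learn as a stand-in for a production system: every learned weight of a toy neural network can be printed, yet the numbers say nothing about why any single decision was made.

```python
# A minimal sketch of the "black box": inputs and outputs are
# observable, but the learned weights do not explain the decision.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy inputs and labels: the only parts we truly observe.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000,
                      random_state=0).fit(X, y)

print(model.predict(X))      # outputs we can check against inputs
for layer in model.coefs_:   # the "inner workings"
    print(layer.round(2))    # a wall of numbers, not a reason
# Thousands of backpropagation steps tuned these weights; nothing in
# them says which inputs mattered or how the decision was reached.
```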

Let's use "facial recognition" as an example, where metadata of human faces is input into a neural network and then correlated with driver's license or passport photos, criminal mugshots, employee identification, or just random pictures taken by some government agency as people walk down the street. This input exposes the network to myriad complicated sets of positions. More likely than not, the initial algorithms were written by a cisgender male from a homogeneous background who has had little to no significant interaction with the type of people in the dataset. The probability that biases are coded into the network, then aggravated by the lack of transparency associated with black-box problems, makes significant errors in output probable if not guaranteed. Current research overwhelmingly shows that people of color are disproportionately arrested, with all the subsequent traumas, based on mistaken facial recognition technology. The machine also has the ability to digitally manipulate and edit visual input, much as an editor would in GIMP. This is not a coder being racist, and certainly a machine cannot independently be racist. It is the process of developing a product within the confines of the corporate business plan, combined with the machine's job of producing unlimited output from the coder's limited experience. The biases are more often omissions grounded in incomprehensive data, particularly machines not knowing that human faces are not logical: not always quantitatively symmetric, not adherent to fixed color palettes, and capable of changing. Different choices of language can also lead the machine to apply different measures of association, and not every equation is effective at fitting every type of data. One widely used facial-recognition dataset was estimated to be more than 75 percent male and more than 80 percent white (Lohr, 2018). Yet facial recognition can destroy lives forever, like the results of Oppenheimer's atomic bomb, with every disruptive mistake.
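The skew in such a dataset shows up as unequal error rates, which an aggregate accuracy score hides. The sketch below, with invented counts, shows the kind of per-group audit the research cited above performs.

```python
# A sketch of a per-group error audit. The counts are invented;
# only the per-group breakdown is the point.
from collections import defaultdict

# (group, predicted_match, actual_match) for a batch of comparisons
results = (
    [("lighter-skinned male", True, True)] * 96
    + [("lighter-skinned male", True, False)] * 4
    + [("darker-skinned female", True, True)] * 65
    + [("darker-skinned female", True, False)] * 35
)

stats = defaultdict(lambda: [0, 0])  # group -> [false matches, total]
for group, predicted, actual in results:
    stats[group][1] += 1
    if predicted and not actual:
        stats[group][0] += 1  # a false match: flagged, but wrong

for group, (false_matches, total) in stats.items():
    print(f"{group}: {false_matches / total:.0%} false-match rate")
# lighter-skinned male: 4% false-match rate
# darker-skinned female: 35% false-match rate
```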


Figure 2.  Less Than Greater Than



Designed Open Prisons


Unpeeling another layer of this onion takes us to the sentient human coder who develops this "open prison" in the first place. What would motivate a software developer or programming engineer to code a system that completely abnegates the ideals of privacy, free will, and empathy? Is this a contemplation the coder has with themselves, or is the probable harm realized Oppenheimer-style, after the fact? Does agile task segmentation (Casado-García et al., 2019), meaning that algorithm development is modularized across many people all over the world rather than one, allow these individuals to see only the tree they personally planted, and not the shared root systems that organically linked forests propagate for their ultimate survival? So many errors are made in the initial development of any algorithm that most programmers are lost in the labyrinth of versions and the push for a working product, rather than the meaning of that product.

When we look at the current world situation, an undermining of information, ethics, and privacy worldwide, what is it that the coder, not the algorithm, is truly distributing? Among the possible answers is global and intercultural cognitive dissonance, which could be an innocent result yet full of potential repercussions, much as Einstein helped found quantum theory and then rejected its implications as beyond scientific cognition even though the mathematics was sound. Former presidents and vice presidents of Facebook, coders themselves, understand the dilemma as starting with dopamine-driven business plans and algorithms that end up destroying how society works. If this is the case, it is difficult to separate these business plans from Purdue Pharma's distribution of opioids or Philip Morris's distribution of cigarettes. All involve the erosion of the core foundations of how people behave individually and toward each other (Lanier, 2018, p. 9), driven by profit incentives. Another societal effect is a citizenry fragmented by delusions of grandeur, an insecure population relinquishing its opportunities to integrate diverse information because people are unknowing victims of specific, tailored output fit to a demographic requested by a third-party client. The individuals have been led to peer groups online through algorithms, equally unaware that their perceptions are persuaded by algorithms and produced through machines.

In full disclosure, scholars, especially 20-years-strong IT professors like this researcher, cannot attribute this to a completely new phenomenon in human behavior. Educators teach history with full knowledge that it omits far too many contributions from people of color and that many of its writings are completely untrue. Governments, including the United States, indoctrinate their citizens into an "ideal" that has never, after centuries of promises, been fulfilled. Nor are any of us sure that this "shining city on a hill" is anything more than a dopamine loop continuing from one administration to another. From these analogies, the drive toward power and mind-control over one's fellow citizens is an ongoing flaw in human nature; some believe it is their manifest destiny. Unfortunately, even with the global reach and non-disclosure of algorithms, the more mundane possibility is that the coder is simply doing a job, detached from any societal perceptions, with no introspection or thought beyond the paycheck received, providing for a family, and expertise in a particular programming language. That language is not the one everyone else in the country speaks: there is no attunement with what surrounding people are saying, no way to accurately interpret their actions (even smiles), and no knowledge of activities going on in the community as a whole. The coder is making a living, and if this creates unchallenged subjectivity that heightens anxiety and defensiveness in users, that is their problem. The coder is no more analyzing the societal impacts of the algorithms than the history teacher is measuring their contribution to generational, systemic white supremacy when attributing Western innovation to those of European descent alone. All scenarios created from false narratives result in some form of violence and societal division while operating under the guise of integrating civil society. Indications of this are evident in the wealth gap between the addicted users and those who own the algorithm machines. If citizens are merely existing within the limitations of a connected world, where all we can do is submit to an algorithmic thought prison, how can we trust our external reality, our internal reality, and the unknown?

Human Co-Agents to Algorithms? 


Whether it can accurately be said that humans are behind every algorithm, or that they are merely co-agents promulgating ever more repetitive, conditioned responses, is an overarching conversation between technology platform owners and thoughtful critics. This would seem to indicate that the solution IS in people, not specifically programmers. Awareness can contribute to gradual change for the better. These conversations cannot be mediated through machines or through uninformed politicians and educators; they must be spoken through our shared humanity. A healthy curiosity about others' perspectives, approached not as battle lines but as seeking inclusion, is a human quality. If one real or imagined demographic believes it can survive without its diverse counterpart, there can be no thriving society. Returning to the 'divide and conquer' thinking of the past is of no benefit when the forces of time direct humanity forward. The shining city on a hill must be defined by understanding and holistic interest in the genuine human, versus incessant methods of manufacturing consumer behavior. Cortana, Alexa, and Siri are not real people, and a majority of interactions on social media are likewise bots. No matter how consistent the responses, they are fake, designed to modify thinking and subsequent behavior toward a transactional demographic to sell to. Each person needs to take back their thoughts, their sense of truth, their will-to-good, and their values from the trappings of social media platforms, before reality dissolves completely into the self-organizing black box of algorithms.
 

References



Casado-García, Á., Domínguez, C., García-Domínguez, M., et al. (2019). CLoDSA: A tool for augmentation in classification, localization, detection, semantic segmentation and instance segmentation tasks. BMC Bioinformatics, 20, 323. https://doi.org/10.1186/s12859-019-2931-1

EUR-Lex. (2020). Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC, COM/2020/825 final.

Lanier, J. (2018). Ten arguments for deleting your social media accounts right now. Picador/Macmillan Publishing Group, LLC.

Lohr, S. (2018, February 12). Facial recognition works best if you're a white guy. The New York Times, p. B1. Retrieved from https://courses.cs.duke.edu//spring20/compsci342/netid/readings/facialrecnytimes.pdf

Rudin, C., & Radin, J. (2019). Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson from an Explainable AI Competition. Harvard Data Science Review, 1(2). https://doi.org/10.1162/99608f92.5a8a3a3d





About the Author





Dr. Desiree L. DePriest is an IT/AI and instructional design professor at Purdue University Global. Desiree's expertise is in information systems and artificial intelligence in business environments. She earned her Ph.D. in Management & Organization with an emphasis in Information Technology from Capella University in Minneapolis, Minnesota. She has been a professor of IT/AI for 20 years, along with teaching and expertise in analytics and instructional design. Her two master's degrees (Telecom and IS, respectively), combined with a Bachelor of Science degree in psychology and two certificates in ABA and I-O psychology, greatly assist in her work in the various areas of business intelligence and industrial and organizational motivation and attitudes. She is the Vice-chair of the IRB.

Desiree created an experiential internship/technology company at Purdue University Global for IT and business students wanting real-world experience prior to graduation. She also created the Graduate Information Technology Association (GITA) and serves as its faculty advisor. Desiree won the "Best Practices" award for her work on the internship from the American Association for Adult and Continuing Education (AAACE). Her publications include research in persuasive and predictive analytics, artificial intelligence and augmented reality, IoT, and pattern recognition. Desiree's recent interests have expanded to neural correlates of consciousness (NCC), quantum teaming (QT), and cognitive coupling (CC). Quantum Teaming is the equivalent of other quality management methodologies, with particular focus on virtual team environments. Desiree presents at conferences in these areas throughout the year. She is currently writing a textbook titled "Intercultural Management" for IUBH International University of Applied Sciences, Germany. She is the author of Quantum Teaming: The Primer, available on Amazon.com. The fully-exploded Quantum Teaming book is scheduled for publication later this year.

Her email is DDePriest@purdueglobal.edu. 

