
C2C Digital Magazine (Fall 2021 / Winter 2022)


Morality in a technological world - A dying skill set?

By Desiree L. DePriest, Purdue University Global



Entering the third decade of the 21st century, it is obvious the world is experiencing radical change. Less obvious is any one specific cause for this change. It would be more accurate, perhaps, to describe these changes as universal, because everything is changing. Short of resigning the world to the “Apocalypse,” there is a widespread need for expedient and effective solutions.

A technological world


Exploring technology in its myriad forms is a good place to start. There is no way to avoid using some type of technology, from mobile devices to vehicles filled with sensors. There are sensors in football helmets and in clothing, and many children have iPads in kindergarten. We appreciate the surveillance of home security products and our IoT devices, and COVID made virtual meetings acceptable in world politics, including United Nations and G-7 conferences. There are benefits and social vulnerabilities in technology. The difference is rarely recognized until a level of addiction or distress has occurred, and then the affected person goes deeper into the rabbit hole in an attempt to get out.


Figure 1.  Artificial Intelligence (by geralt on Pixabay)





Technology issues are complex because the potential for danger does not begin at the user level. It originates at the design level of sites like Facebook, LinkedIn, Instagram, and Twitter, and even Google. The design is where the algorithms are programmed to perform a profitable, return-on-investment function. For example, the sensor in football and military helmets, called the NoMo, incorporates electroencephalography (EEG) sensors of the type commonly used in hospitals to measure a patient's brain activity (Craig, 2019). The sensors are tucked in between the pads of a football helmet. The proprietary algorithm processes the brainwaves, and should it register a physiologically relevant concussive event, an unambiguous alert is sent to the sidelines or to command rescue operations (nomodx.com, n.d.). A moral objective is inherent in NoMo: to save human lives by identifying dangerous neurological changes in real time. The return on investment (ROI) is in the licensing.

This is a prescriptive algorithm, although we cannot dissect it due to its proprietary status. Nevertheless, it can be defined as prescriptive because it is a miniaturized version of the technology used in hospitals to alert clinicians to brainwaves indicating a concussion. It is not predictive or persuasive. It is quantitative, based on proven standards of measurement and care. It reports problems that may need diagnosis. This is an important algorithmic distinction from social networks and browsers.
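To make the distinction concrete, here is a minimal sketch of a prescriptive alert loop in Python. This is not NoMo's proprietary algorithm, which is not public; the threshold, the scoring rule, and all names are illustrative assumptions.

```python
# Hypothetical sketch of a prescriptive alert loop; NOT NoMo's
# proprietary algorithm. The threshold and scoring rule are invented.

CONCUSSIVE_THRESHOLD = 0.85  # assumed cutoff, standing in for clinical EEG standards

def concussion_score(eeg_window):
    """Reduce a window of EEG samples to a single risk score (placeholder logic)."""
    return max(abs(sample) for sample in eeg_window) / 100.0

def monitor(eeg_windows, send_alert):
    """Check each window against a fixed clinical threshold; report, don't persuade."""
    for window in eeg_windows:
        if concussion_score(window) >= CONCUSSIVE_THRESHOLD:
            send_alert("Physiologically relevant concussive event detected")

# Example: one quiet window, then a spike that crosses the threshold.
monitor([[3, -5, 7], [12, -95, 40]], send_alert=print)
```

The defining feature is that the rule is fixed in advance from established standards; nothing in the loop learns from, or steers, the person wearing the helmet.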


Figure 2.  Calculation (by TheDigitalArtist on Pixabay)




Supercomputers and AI


On the other side of the spectrum, we have supercomputers running artificial neural networks (ANNs). These are algorithms with custom architectures and networks delivering high performance in terms of instructions executed per millisecond (Khan Academy, n.d.). The programmer only needs to specify the nested parallelism within a computation, without worrying about communication protocols, load balancing, or other issues of static-thread programming. A process called “spawning” occurs, allowing a practically unlimited number of subroutines to compute concurrently. ANNs are self-learning algorithms that require little-to-no human programming after the initial parallelism is in place.
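To make “spawning” concrete, here is a minimal sketch, assuming Python's standard concurrent.futures module as a stand-in runtime (the article names no specific framework): the programmer states what may run in parallel, and the runtime decides how.

```python
# Minimal sketch of spawning parallel subroutines. Python's standard
# concurrent.futures is used as a stand-in runtime; no specific
# framework is named in the article, so these details are assumptions.
from concurrent.futures import ThreadPoolExecutor

def analyze(segment):
    """Placeholder subroutine; each spawned task runs concurrently."""
    return sum(segment) / len(segment)

def fan_out(data, chunk_size=4):
    """Split the work, then let the pool schedule and balance the tasks."""
    segments = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:            # programmer states the parallelism
        return list(pool.map(analyze, segments))  # runtime handles threads and balancing

print(fan_out(list(range(16))))  # [1.5, 5.5, 9.5, 13.5]
```

The programmer never writes a communication protocol or a load balancer; the runtime supplies both, which is precisely what lowers the human effort once the initial structure is in place.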

The difference between persuasive algorithms in social media and browsers versus prescriptive algorithms in football helmets is the autonomy given to machine language. The algebraic logic of the ANN creates the connections, recommendations, and predictions given to the users. If the machine concludes that your interests are “x,” it will bait you with advertising, articles, and propaganda related to “x.” It doesn't matter that there are 25 other letters in the alphabet. The user assumes they are choosing their direction on a platform. Users also assume they are availed the full range of options available on the platform. Neither assumption is accurate: every part of your user profile has been analyzed by the networks, platforms, and browsers; assessed for your value fit to vendors; and a subroutine sends you down the associated, ever-spawning rabbit hole. For example, a user is looking for pillows and sheets at My Pillow™. It is public information that the owner of this company has strong Republican affiliations, in addition to active videos and conferences supporting Donald Trump and conspiracy theories. As a result of looking for pillows, subroutines can easily spawn and lead the user to Republican advertisements, chats, and groups. The algorithm is predicting that the user can logically be persuaded to the Republican platform. The reverse applies as well: a Republican can logically be directed to My Pillow™.
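A toy sketch shows how this narrowing works. The catalog, the tags, and the scoring rule below are invented for illustration; no real platform's system is being reproduced here.

```python
# Toy sketch of the persuasive feedback loop described above. Catalog,
# tags, and scoring are invented; this is not any platform's real system.
from collections import Counter

click_history = ["pillows", "sheets", "pillows", "pillows"]

catalog = {
    "pillows": {"home"},
    "sheets": {"home"},
    "political_ads": {"home", "politics"},  # assumed vendor-paid adjacency
    "gardening": {"garden"},
}

def recommend(history, catalog, k=3):
    """Rank items by overlap with inferred interests; everything else vanishes."""
    interests = Counter(tag for item in history for tag in catalog.get(item, ()))
    scores = {name: sum(interests[t] for t in tags) for name, tags in catalog.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(click_history, catalog))
# ['pillows', 'sheets', 'political_ads']  ("gardening" is never offered)
```

Once the loop infers “home,” adjacent vendor content rides along, and the rest of the catalog quietly disappears. The user never chose that narrowing; the subroutine did.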

Directed political choices


Political persuasion via self-learning networks has arguably become among the most dangerous spawns on social networks and browsers. Politics is the only profession in the U.S. that does not require any prerequisite skills; this includes the President, Congress, and the Supreme Court. Americans need training to work in restaurants, collect trash, or teach, but the central pillars of government can be filled by anyone who is underwritten financially or politically and invested in a robust set of subroutines on social media.

Political stances from politicians are becoming more aggressive and more persuasive toward violence. Compromise, a core premise of morality, is not a technical capability, and ironically it is also waning in human behavior. Technology has become the main diet of our brains and can arguably be correlated with the cognitive output of users in ways we have yet to fully understand.

Risks of self-learning algorithms


The need for further examination of the psychological repercussions of self-learning algorithms cannot be overemphasized. The human user is being led and manipulated by machines that are incapable of morality. Computer self-learning has many great benefits, but morality is not one of them. When an algorithm is left to its autonomous, self-learning directions, it produces algebraic logic exempt from any morality. Millions of human users take the information offered as gospel, base their entire social position and decision-making on what these networks provide, and behave aggressively against those with a different point of view. A lack of compassion; a lack of morality.

It is not a simple task to define morality. It is taken for granted, like blood type, or the Golden Rule: “Do unto others as you would have them do unto you.” A short stroll through human history reveals how fragile morality can be. Morality is vulnerable to every other malady society is experiencing now: oil companies that knew for decades that carbon emissions were endangering the climate; nation-building, which has yet to succeed, leaving destroyed, angry cultures all over the world.

Disparate existences


Economic disparity, with its offshoot of “us and them” and “isms,” perpetuates misguided fear of the other. Education is cyclically attacked by myopic, self-interested intellect-haters, as well as by sites that sell papers to students while educators try desperately to expand students' critical thinking and appropriate pedagogic abilities. Users habitually believe that every correct answer is available on the Internet.

It is certainly more efficient and lucrative for a company to invest in a self-learning neural network, program the basics, and let it whirl. Fewer programmers need to be hired, and data does not need to be aggregated in real time because there are no moral regulations to maintain. No humans monitor the real-time activities of the subroutines, which is why horrible tweets and posts are allowed to stay on a platform for days. The subroutines are reviewed only periodically: to market to vendors, to complete tax and financial obligations, or when a media blitz draws attention to them. None of these situations involves any consideration of morality.

Any company's strategic goal is focused on getting the product sold and the profits made. However, we cannot consider the platform itself the product on social media or browsers, because we cannot purchase it. We can purchase advertised products through these platforms, and we can be persuaded by them, but the social medium is not the product. Reviewing the tax categories of many of these companies shows that a platform's value is calculated on the number of users it can keep engaged to serve those vendors (MIT, 2021). Users are not clearly informed of this, nor are they paid for their information, which is also a moral issue. Without consent, users are among the products the platform is selling.
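A back-of-envelope calculation makes the incentive plain. Every figure below is an invented placeholder, not any platform's actual number; the point is only that revenue scales with engaged attention.

```python
# Back-of-envelope sketch of the engagement business model described above.
# All figures are invented placeholders, not any platform's actual numbers.
engaged_users = 1_000_000
ads_seen_per_user_per_day = 30
cpm_dollars = 5.00  # assumed price advertisers pay per 1,000 impressions

daily_impressions = engaged_users * ads_seen_per_user_per_day
daily_ad_revenue = daily_impressions / 1_000 * cpm_dollars
print(f"${daily_ad_revenue:,.0f} per day")  # $150,000: attention is the inventory
```

Every extra minute a user stays engaged adds impressions to the inventory, which is why keeping users on the platform, rather than informing them well, is the quantity being optimized.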


Figure 3.  Wrong Way Right Way (by geralt on Pixabay)




Social networks and browsers are not "friends"


The moral of this article is to stop looking at social networks and browsers as your friends. They are machines, and machines are incapable of morality. Use online technology for the specific reason you engaged it, and cross-reference information with experts, books, newspapers, and face-to-face human associates. Avoid click-bait that takes you down rabbit holes or into social groups you were not seeking. User click behavior and navigational behavior are collected by ANN algorithms to derive persuasive insights. The technology retains your history, and many encounters are not human interactions but bots. Disinformation, misinformation, and fringe groups are a real problem, but they are not the point of origin for our current dilemmas. The algorithms, like second-hand smoke, set the design for bad habits, illness, and addiction. Until self-learning neural networks are federally and internationally regulated or banned from social media and browsers, and until users stop being vulnerable to their brainwashing, morality will continue to erode as a dying skill set.



References



Craig, D. J. (Winter 2018-2019). New smart helmet could spot concussions in real time. Columbia Magazine. Retrieved from https://magazine.columbia.edu/article/new-smart-helmet-could-spot-concussions-real-time

Khan Academy. (n.d.). Measuring an algorithm's efficiency. Created by Pam Fox. Retrieved from https://www.khanacademy.org/computing/ap-computer-science-principles/algorithms-101/evaluating-algorithms/a/measuring-an-algorithms-efficiency

MIT (June 16, 2021). The case for new social media business models. Created by Sara Brown. Retrieved from https://mitsloan.mit.edu/ideas-made-to-matter/case-new-social-media-business-models

NoMo Diagnostics. (n.d.). Retrieved from https://www.nomodx.com/contact





About the Author






Desiree L. DePriest has been an IT/AI business intelligence professor at Purdue University Global for 16 years. Desiree's expertise is in business intelligence information systems and artificial intelligence in business environments. She holds a Ph.D. in Management & Organization with an emphasis in Information Technology, along with two master's degrees (Telecom and IS, respectively). Desiree has a Bachelor of Science degree in psychology and certificates in ABA and I-O psychology, which greatly assist her work in the various areas of business intelligence, industrial and organizational motivation, and attitudes. She is the Vice-chair of the Institutional Review Board at Purdue Global and attended UMKC Law School.

Desiree developed and directs the Purdue Global Internship Program – Technology (PGIP-T), an internship for IT and business students wanting real-world experience and certificates prior to graduation. She also created the Graduate Information Technology Association (GITA) for active and alumni IT/business students and serves as its Faculty Advisor. Desiree has won the “Best Practices” award for her work on the internship from the American Association for Adult and Continuing Education (AAACE). She also received Honorable Mention for the Purdue University Global Accomplished Learners Award for implementing ways to increase graduation rates, student satisfaction, retention, employment, and learning. Her publications include research in persuasive and predictive analytics, artificial intelligence and algorithms in decision support, and pattern recognition. Other intellectual contributions include The Intentionality of Systemic Racism and Perpetual Trauma – The Effects of Poison-Privilege (book chapter, 2020). Desiree recently completed a German textbook in Intercultural Management (2020). Her interests continually expand to neural correlates of consciousness (NCC), cognitive computing (CC), and quantum teaming (QT). Quantum Teaming© is a quality management methodology with particular focus on virtual team environments and is the intellectual property of Dr. DePriest. Desiree presents throughout the year at conferences in these areas and is an active member of several professional organizations.