
C2C Digital Magazine (Fall 2021 / Winter 2022)


Book review: Identifying the informative “unusual” computationally

By Shalin Hai-Jew, Kansas State University





Anomaly Detection:  Techniques and Applications
Saira Banu Atham, Shriram Raghunathan, Dinesh Mavaluru, and A. Syed Mustafa, Co-editors
Nova Science Publishers
2021
177 pp.



A lot of learning comes from observing what happens normally.  In many cases, though, identifying what happens in unusual contexts may also provide insight.  In the computational space, “anomaly detection” in available data is used in a range of practical applications, with real implications for people’s daily lives.  Anomalies refer to data deviations from a normal state of observed behaviors, beyond particular parameters.  Anomalies are the data points at the far ends of the min-max range, at the tails of the normal curve, the isolated datapoints in scatterplots, the unclustered datapoints in a 2D or 3D data representation; with multidimensional data, they are the datapoints that do not cluster.
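As a minimal illustration (not drawn from the book), the simplest statistical framing flags a datapoint as anomalous when it sits more than a few standard deviations from the mean:

```python
import numpy as np

def zscore_outliers(values, threshold=3.0):
    """Flag datapoints more than `threshold` standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.abs(z) > threshold

# Hypothetical sensor readings with one injected anomaly.
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 23.5, 10.0]
mask = zscore_outliers(readings, threshold=2.0)
print(np.asarray(readings)[mask])  # -> [23.5]
```

More sophisticated methods, including several covered in the book, generalize this intuition to clusters, densities, and high-dimensional structure.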

Saira Banu Atham, Shriram Raghunathan, Dinesh Mavaluru, and A. Syed Mustafa are co-editors of Anomaly Detection:  Techniques and Applications, which highlights some of the techniques and technologies to achieve anomaly detection in various systems.  

Building trust between nodes


N. V. Kousik, R. Arshath Raja, N. Yuvaraq, and S. Anbu Chelian’s “Secured and Automated Key Establishment and Data Forwarding Scheme for the Internet of Things” (Ch. 1) opens with the observation that data security is a major challenge for Internet of Things devices.  An untrustworthy device may result in stolen data and malicious actions.  They propose an approach that would enable the “optimal selection of the proxy server in terms of QoS constraints, where the cryptography would be performed” (p. 2) in Wireless Sensor Networks.  They write:  “This work ensures security by doing collaborative key management between sensor, proxies and server” (p. 2), based on lightweight processing at the nodes.  If two nodes cannot make a trust connection, that connection is not used, and the message reroutes through trusted nodes.  Their technique, termed Secured and Automated Key Establishment using Modified BAT and Fuzzy Neural Network (SAKE-MBAT-FNN), enables secured key handling for IoT security (p. 16).
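The review does not spell out the internals of SAKE-MBAT-FNN, but the rerouting idea (messages travel only over links between trusted nodes) can be sketched generically as path-finding over a trust graph; all node names below are hypothetical:

```python
from collections import deque

# Hypothetical trust graph: an edge exists only where two nodes have
# established trust; untrusted links are simply absent.
trust_graph = {
    "sensor_A": ["proxy_1"],
    "proxy_1": ["sensor_A", "proxy_2"],
    "proxy_2": ["proxy_1", "server"],
    "server": ["proxy_2"],
}

def trusted_route(graph, src, dst):
    """Breadth-first search for a path that uses only trusted links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no fully trusted route exists

print(trusted_route(trust_graph, "sensor_A", "server"))
# -> ['sensor_A', 'proxy_1', 'proxy_2', 'server']
```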

Evolutionary-based optimization


Algorithms for anomaly detection can harness any number of strategies and tactics to identify zero-day exploits that may end up compromising various technology systems.  These algorithms are ranked by their detection accuracy and their false positive rates (suspecting a zero-day when none actually exists).  False leads can burn time and effort as people try to track down, identify, and neutralize threats.
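In concrete terms (with hypothetical counts, not figures from the chapter), the two ranking metrics come straight from a confusion matrix:

```python
def detection_metrics(tp, fp, tn, fn):
    """Detection (true positive) rate and false positive rate from raw counts."""
    detection_rate = tp / (tp + fn)       # share of real attacks caught
    false_positive_rate = fp / (fp + tn)  # share of benign events wrongly flagged
    return detection_rate, false_positive_rate

# Hypothetical evaluation: 990 of 1,000 attacks caught; 20 of 9,000 benign
# events flagged as attacks.
dr, fpr = detection_metrics(tp=990, fp=20, tn=8980, fn=10)
print(f"detection rate {dr:.1%}, false positive rate {fpr:.1%}")
# -> detection rate 99.0%, false positive rate 0.2%
```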

Vidhya Sathish and Sheik Abdul Khader’s “A Study of Enhanced Anomaly Detection Techniques Using Evolutionary-Based Optimization for Improved Detection Accuracy” (Ch. 2) proposes a method to improve the performance of such systems in the wild, pitted against others’ methods and systems in the cyber research community.  Intrusion detection systems are often trained on known malicious signatures, which are then identified and defended against at the security perimeter.  Anomaly detection features, by contrast, analyze network traffic to identify atypical features and behaviors that may indicate a system compromise under way.  Intrusions have historically followed four patterns:  denial of service attacks, probing attacks “that initiate network scanning with a malicious intent” to access system information, user-to-root access of protected information, and root-to-local attacks to “loot the information through authenticated user” (pp. 20-21).  The point is to have the computational “learner” learn not only from the past but also from the present in an ongoing way, and to identify indicators that are actual partial warning signals (vs. noise), even with unknown facts and incomplete information.  Often the data come from server logs and other captured information.





Given the speed at which compromises may occur, interventions have to be sufficiently responsive to head off an attack.  Intrusion detection systems are generally classifiers (labeling activity non-malicious or malicious), evolutionary-based systems (generally defined as emulating the “pheromone behavior of real species, mammals, ants, etc.) whilst hunting…prey” (p. 22)), or hybrids combining the two approaches (Sathish & Khader, 2021, p. 23).  In the real world, such systems have to work across different technologies and conditions; they have to work even as a system may be partially failing due to the attack.  They have to be effective as a credible defense (against a world of adversaries) at the lowest computational expense.

A number of known attack methods have been profiled based on particular actions and tells.  They include names such as the following:  Neptune, smurf, teardrop, buffer overflow, perl, Nmap, satan, Multihop, and phf (Sathish & Khader, 2021, p. 26).  The methods involve different attributes, sequences of attributes, and interactions.  The detection approaches, which include cross-validation and other methods to improve performance, are tested on canonical datasets to assess performance with real-world data.  Some of the early approaches show fairly high variance in performance.  With the application of evolutionary-based approaches, detection rates have improved, some to near 99%, with false positive rates down to a few percent.
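As a toy sketch of the evolutionary idea (not the chapter’s actual algorithms), a genetic search can evolve a subset of traffic features that maximizes a detection score; the scoring function below is an invented stand-in for a real detector’s accuracy:

```python
import random

random.seed(0)
N_FEATURES = 8  # candidate traffic attributes (hypothetical)

def fitness(mask, score_fn):
    """Detection score for a feature subset, lightly penalizing subset size."""
    return score_fn(mask) - 0.01 * sum(mask)

def evolve(score_fn, pop_size=20, generations=30, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, score_fn), reverse=True)
        survivors = pop[: pop_size // 2]           # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, score_fn))

# Invented stand-in for a detector's accuracy: features 1, 3, and 5 "matter."
toy_score = lambda m: 0.5 + 0.15 * (m[1] + m[3] + m[5])
print(evolve(toy_score))  # should converge toward [0, 1, 0, 1, 0, 1, 0, 0]
```

This is "evolutionary-based optimization" in miniature: selection, crossover, and mutation iteratively improve candidate solutions against a fitness measure.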

Given the delay in publication and the need to protect methods, such research articles have a lag.  Still, for learners, understanding how the community converges on a solution type based on expertise and empirical data may make for engaging reading.  

Practical technologies and techniques for identifying outliers


In any number of contexts, identifying the outliers in a dataset may be informative.  For example, in a blood lab panel, what numbers are unusual, and what do they suggest about a person’s underlying health?  In a group of employees, what does it mean that a person is high-achieving or low-achieving?  In observing market trades, what does it mean when there is an unusual pattern of trades based on timing, costs, and target trades?  

What is the relevance of a deviated datapoint or a cluster of them?  What makes the anomaly stand out?  What does the anomaly mean?  How is the outlier data relevant in the particular context?  Can the same phenomenon be seen using other data sources? When are such anomalies seen, and why?  What explains the average, the normative, and why?  

Huichen Shu’s “Anomaly Detection and Applications” (Ch. 3) explores some commonly available software programs (both commercial and open-source) and their respective statistical methods that may be used to identify anomalies.  Traditional outlier detection algorithms used “probabilistic and statistical, linear, proximity-based, and high dimensional methods” (p. 47), with varying levels of fit to different data contexts.  [The author notes that a number of available technologies may be used:  Matlab, SPSS, R, and others.  Users may have their own preferences based on different user interfaces and data visualization outputs.]  

Shu reviews the uses of principal component analysis (PCA) and Eventual PCA for outlier detection.  The author writes of a data graph:  “…the points distributions are along two vectors, namely, u1 and u2. Intuitively, there are more samples in (the) u2 direction; the PCA model would regard u2 direction as an output to present the original dataset.  The point of PCA is to find proper directions like u1 and u2 by the covariance matrix.  Logically, points that are generated with the same principle would match the same directions.  So the samples do not match the directions that most observations match and are hence regarded as Outliers” (Shu, 2021, pp. 51-52).    
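As a minimal sketch of the plain-PCA recipe the author describes, assuming sklearn and synthetic data, points that do not match the dominant direction show high reconstruction error and can be flagged:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Synthetic data: inliers lie roughly along one direction; two points do not.
inliers = (rng.normal(size=(200, 1)) @ np.array([[2.0, 1.0]])
           + rng.normal(scale=0.1, size=(200, 2)))
X = np.vstack([inliers, [[-4.0, 5.0], [5.0, -4.0]]])

pca = PCA(n_components=1).fit(X)
reconstruction = pca.inverse_transform(pca.transform(X))
errors = np.linalg.norm(X - reconstruction, axis=1)  # distance off the main axis

threshold = errors.mean() + 3 * errors.std()
print(np.where(errors > threshold)[0])  # -> [200 201], the injected outliers
```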

Another approach involves Support Vector Machines, particularly the One-Class SVM.  Here, the model determines the boundary for membership:  those inside are not outliers.  The svm subpackage in sklearn in Python or the One-Class LIBSVM Anomaly Score in RapidMiner may be used to achieve this (Shu, 2021, p. 54).  In another approach, neural network modeling is applied to identify anomalies, with many perceptrons or layers to capture nuanced details (p. 56).  Clustering is introduced as an effective method, with the idea that whatever datapoints are not within a particular size cluster are outliers (Shu, 2021, pp. 58-59).  [I’d thought that the author was going to assert something similar with the PCA, that any attributes not pulled into one of the main components would be labeled outliers.]  So, too, with the nearest neighbor types of similarity analyses (pp. 61-62).  There are density-based methods, too (p. 62); high-dimension outlier detection (taking into consideration multiple aspects of the datapoints) [with the caveat that the data must include “proper subsets of dimensions to explore outliers” (p. 67)]; an “isolation forest” approach (pp. 68-69), and others.  There is a short section on integrated models at the end.
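Since the chapter names sklearn’s svm subpackage, a brief sketch on synthetic data shows how the One-Class SVM and isolation forest approaches are invoked (both label inliers +1 and outliers -1):

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(size=(300, 2)),   # one normal cluster
               [[6.0, 6.0], [-7.0, 5.0]]])  # two injected anomalies

# One-Class SVM: learns a boundary around the bulk of the data.
ocsvm = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(X)

# Isolation forest: anomalies are isolated in fewer random splits.
iforest = IsolationForest(contamination=0.01, random_state=0).fit(X)

print(np.where(ocsvm.predict(X) == -1)[0])   # flagged indices should include 300, 301
print(np.where(iforest.predict(X) == -1)[0])
```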

This work may spark new ideas about how to identify anomalies in less traditional ways.  It does not offer insights on how to clean data, how to set parameters, or how to approach findings with nuance.  The accuracy and recall assessments of the approaches rest on a limited pool of data (only a few datasets were used across the various assessments).

While the author offers some light assessments of the strengths and weaknesses of each approach, the evaluations seem more specific to the data or the particular run than full evaluations of each modeling approach for outlier detection.  That said, using statistical tools, modeling from data, and then assessing predictive outcomes is an important sequence to practice and learn.  Forming the capability to reason through how to conduct statistical analyses is an important professional skill.  Still, it is important not to underestimate the complexity of such reasoning, and the importance of building on accepted methods within a field before using tools non-normatively.  Some of these approaches read as too freewheeling to be accepted in different academic contexts.  Too many will learn to just start up a computer program, ingest some data unthinkingly, and then come out with a data visualization that they try to publish, even when it is not fully set up or reasoned.  The assertions may be unintelligible or indefensible, at least in relation to the data.  Too many assume that there is a one-button solution to complex data analyses.  It also helps to have some level of intimacy with the data, so that the anomalies, once identified, can be better understood.

On the Social Internet of Things (SIoT)





The core vision for the Social Internet of Things (SIoT), from 2014, applies a social layer to the IoT.  In this approach, the various objects (sensors, devices, and other technologies) can establish social relationships with each other autonomously.  This way, “friendship” between trusted objects may result in higher efficiencies for people.  The IoT can enable heightened conveniences and new use cases, and the data it generates can be analyzed in different ways.  “Smart” and “social” are combined.

To enable that vision, the various devices in an IoT assembly have to be able to interoperate with various other devices, connect in trustworthy ways, identify other devices accurately, represent themselves accurately, exchange information without leaking it (to third parties), and enable safe interactions, all in efficient ways, even with light resources.  These challenges need to be overcome if the SIoT paradigm is to be realized, according to Dinesh Mavaluru and Jayabrabu Ramakrishnan in “An Evolutionary Study on SIoT (Social Internet of Things)” (Ch. 4).  The creation of trust has to work across the five levels of the IoT:  perception (sensors), network (data transmission and exchange), middleware, application (services), and business layers (pp. 79-81).  The rush to market for various IoT devices has meant that security has been overlooked in much IoT hardware, save “a meagre (sic) protection by implementing software-based solution” (p. 82), which is insufficient.  Based on the so-called “CIA triad,” security requires that data is kept confidential, accurate (with integrity, not changed), and available to authorized users (pp. 83-84).  The coauthors suggest the importance of setting regulations and standards for the quality of the underlying technologies in the IoT and the SIoT.  They point to various extant standards that need to be adhered to for the various layers of the IoT.  Perhaps, as quality standards are adhered to and particular objects and nodes in the IoT can select their trusted friends, lesser players will be forced out of the market, ensuring the retention of quality.  Or it may mean a separation of sociality between the devices within the inner circle and those outside it.

Reading human emotional states from observed expressions


Jayabrabu Ramakrishnan and Dinesh Mavaluru’s “A Critical Study on Advanced Machine Learning Classification of Human Emotional State Recognition Using Facial Expressions” (Ch. 5) provides an in-depth review of the literature on applying deep learning to the identification of human expressions, in dynamic play over facial features.  This chapter begins with some contexts in which machine recognition of human emotions may be relevant in human-computer interactions:  “medical treatment, sociable robotics, driver weariness reconnaissance, and numerous other human-computer association frameworks” (p. 94).

The study of human expressions has a fairly long history, over generations.  With the advent of machine vision and machine learning, among other advances, various algorithms may be applied to the pre-processing of facial imagery, the extraction of features, and then their classification into various emotions.  The output variable of the Facial Expression Recognition (FER) system is the identified emotion (happiness, surprise, smile, sadness, anger, fear, or disgust) (Ramakrishnan & Mavaluru, 2021, p. 95), as a prediction.  This review focuses on various researchers’ uses of deep learning-based classifiers as applied to different facial expression databases.  As research is published, there is value in reviewing available works and extracting the most important aspects.  Their conclusion:  “The deep learning-based classifier represented the best results in terms of recognition accuracy, which represents an accuracy of 99.6% for facial expressions” (p. 127).
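As a minimal sketch only (none of the reviewed classifiers is reproduced here), a small convolutional network in PyTorch maps a grayscale face crop to one of the seven emotion labels the review lists:

```python
import torch
import torch.nn as nn

EMOTIONS = ["happiness", "surprise", "smile", "sadness", "anger", "fear", "disgust"]

class FERNet(nn.Module):
    """Toy convolutional classifier for 48x48 grayscale face crops."""
    def __init__(self, n_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 12 * 12, n_classes)  # 48 -> 24 -> 12

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))  # one logit per emotion

batch = torch.randn(4, 1, 48, 48)  # four stand-in face crops (random pixels)
predictions = FERNet()(batch).argmax(dim=1).tolist()
print([EMOTIONS[i] for i in predictions])  # untrained, so labels are arbitrary
```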

Anomaly detection for wireless sensor networks


Many locales use deployed sensors to enable smart awareness and other functions in various environments.  One technological challenge is how to collect data from the various sensors, sources, and clusters in a Wireless Sensor Network (WSN) and transfer it safely in a way that people can access, make sense of, and use.  Beski Prabaharan and Saira Banu’s “Anomaly Detection for Data Aggregation in Wireless Sensor Networks” (Ch. 6) proposes a method in which data is moved securely, validated, and stored at the base station.  Anomaly detection is part of the sequence, in order to identify untrustworthy nodes and untrusted data.  Their methods address both homogeneous and heterogeneous WSNs.

Detecting anomalous users…in real time…based on call records


“There are still billions of people connected around the world, who depend on the normal phone calls,” begins Saira Banu and Beski Prabaharan’s “Algorithm for Real Time Anomalous User Detection from Call Detail Record” (Ch. 7).  Indeed, the challenge of heading off spam calls (and other types of spam) has been with humanity for many years, and it continues through the present in both developed and developing countries.  This work describes a method to enable spam detection by identifying anomalous users from Call Detail Records (p. 151).  The work is aided by a general sense of spam-caller features:  the sending of bulk messages for “financial or personal gain,” including for harassment.  One Android app described uses a database of identified spammers to label the origin of incoming calls on the device (p. 153).  A summary of other methods covers different levels of approach:  system-level, server-based, or client-based, each depending on categorizing communications by contents and/or feedback.  The researchers describe the granular contents of a call record (p. 155) and calculate a Trust Value based on selected aspects of the record.  It is unclear how effective this approach might be, and the process does not seem to have been tested.
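Since the chapter’s actual formula is not given in the review, a purely hypothetical trust score (with invented fields and weights) might combine a few CDR features that tend to separate spammers from ordinary callers:

```python
# Purely hypothetical trust score from Call Detail Record features; the
# chapter's actual formula is not given in the review, so fields and
# weights here are invented.
def trust_value(cdr):
    answered = cdr["calls_answered"] / max(cdr["calls_made"], 1)
    repeats = cdr["repeat_callees"] / max(cdr["unique_callees"], 1)
    avg_minutes = cdr["total_duration_sec"] / max(cdr["calls_made"], 1) / 60
    # Spammers tend to make many short, unanswered calls to mostly new numbers.
    return 0.4 * answered + 0.3 * repeats + 0.3 * min(avg_minutes / 3, 1.0)

suspect = {"calls_made": 500, "calls_answered": 40, "unique_callees": 480,
           "repeat_callees": 5, "total_duration_sec": 6000}
print(f"trust value: {trust_value(suspect):.2f}")  # low score -> anomalous user
```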

Securing informational transactions over the Web





Syed Mustafa and Madhivanan’s “Secured Transactions from the Anomaly User” (Ch. 8) provides an overview of how informational transactions are secured:  HTTPS secure connections, protected password hashing, keeping sensitive information out of URLs, the OAuth authorization framework, and the SSL/TLS cryptographic protocol in the transport layer.  The authors describe microservices using REST APIs and SOAP “to secure the transaction through 2-way SSL using Spring Boot Technology” (p. 159).  For the common reader, it helps to know some of the efforts deployed to protect their uses of the Internet for so many aspects of their lives, their privacy, and their data.  For developers, this may apply to some professional use cases.  Given the speed of change, it is hard to know what is still timely and what is not.
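As a generic illustration of two-way (mutual) TLS, standing in for the chapter’s Spring Boot setup, a Python client can both present its own certificate and verify the server’s; all hostnames and file paths here are hypothetical:

```python
import requests

# Two-way (mutual) TLS: the client both presents its own certificate and
# verifies the server's.  Hostname and file paths are hypothetical.
response = requests.get(
    "https://api.example.com/transactions",
    cert=("client.crt", "client.key"),  # client certificate and private key
    verify="ca-bundle.crt",             # CA bundle used to verify the server
    timeout=10,
)
response.raise_for_status()
print(response.json())
```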

Conclusion


Anomaly Detection:  Techniques and Applications, by Saira Banu Atham, Shriram Raghunathan, Dinesh Mavaluru, and A. Syed Mustafa (2021), provides helpful insights into foundational practices in “anomaly detection,” described broadly and applied practically to various socio-technical and other systems.







About the Author


Shalin Hai-Jew works as an instructional designer / researcher at Kansas State University.  She is working on multiple book projects. Her email is shalin@ksu.edu.  