Book review: A broadscale view of applied computer programming
By Shalin Hai-Jew, Kansas State University
Research Anthology on Recent Trends, Tools, and Implications of Computer Programming (4 volumes)
Information Resources Management Association
IGI Global
2021
2069 pp.
To harness the power of computers, people learn about computational thinking. They study how to take their subject matter expertise in various fields and collaborate with computer scientists and programmers to actualize programs for research, data visualization, and teaching and learning, among other endeavors. Such human-computer collaboration is important for advancing fields. In the public mind, perhaps, coding is for everyone in a democratized sense; learners from preschoolers to older adults are training in computational thinking.
Many others are learning how to code directly, so they can make their own lightweight programs and scripts to augment publicly available software programs. Some may study more deeply to contribute to larger-scale projects. A few may make their own games or applications, which they may share on various online stores or make available on websites. Many public-facing services now follow a “no-code” model, enabling people to sequence what they want to achieve without needing to know how to program. Those who want to see the underlying code of their programs may view the code levels. Regardless, understanding something of how computers work, and ways to speak to computers through high-level languages and other means, benefits the learner. Without programming literacy, people may miss much of the parallel digital world and its rich enablements.
Research Anthology on Recent Trends, Tools, and Implications of Computer Programming (4 volumes) is a compilation of academic works by the Information Resources Management Association of IGI Global. This work comes in at a whopping 2,069 pages and comprises 93 chapters originally published in other books with this publisher. A reader comes away from this work with the sense that the most influential group is the smaller, more elite one that develops software professionally, along with some who work on large-scale open-source projects, while everyone else remains at the shallow end. This collection offers deeper insights about the state of computer programming and macro-level trends.
This collection opens with Surajit Deka and Kandarpa Kumar Sarma’s “Joint Source Channel Coding and Diversity Techniques for 3G/4G/LTE-A: A Review of Current Trends and Technologies” (Ch. 1), a theoretically rich work about practical ways to enable efficient and stable network communications based on different encoding approaches. The chapter posits that a joint source-channel coding approach with multiple inputs and multiple outputs can be optimal for such communications networks, given constraints. The authors propose a particular architecture, presented through various equations, diagrams, and descriptions.
Various studies and anecdotal observations suggest a high failure rate of software projects. Creating software is complex, and there are many requirements for the technologies. In the past decades, various approaches have been tried to smooth the process of software development, including ontologies to “improve knowledge management, software and artifacts reusability, internal consistency within project management processes of various phases of software life cycle,” according to Omiros Iatrellis and Panos Fitsilis’s “A Review on Software Project Management Ontologies” (Ch. 2) (2021, p. 27). Their review of the literature results in the finding of the “lack of standardization in terminology and concepts, lack of systematic domain modeling and use of ontologies mainly in prototype ontology systems that address rather limited aspects of software project management processes” and other limitations in the space (p. 27). The coauthors observe that the main purpose of software engineering “is to identify precisely what are the repeatable and reusable procedures in software development, and to support, regulate and automate as many as possible while leaving as little as possible for mental-intensive work” (p. 28). Some of the methods include recognizable approaches like “Waterfall, Prototyping, RAD (Rapid Application Development), Incremental, Spiral, UP (Unified Process), XP (Extreme Programming), Scrum, etc.” (p. 28). While the goal is to improve software developer productivity, code quality, software functionality, effective collaboration, as few bugs as possible, and helpful documentation, this is complex work with both high intrinsic and germane cognitive load demands. This work provides a brief history of the various methods and their evolutions over time but offers more secondhand and theoretical observations than direct, applied insights. What actually works in Software Project Management (SPM) may depend on local conditions.
If there are typical paths for the development of software, Jan Kruse’s “Artist-Driven Software Development Framework for Visual Effects Studios” (Ch. 3) suggests that some approaches are more unusual than others. Over the past three decades, the film industry (from indie to mainstream commercial filmmakers) and particularly the visual effects studios have informed software development and its commercialization. The film industry requires effective visual effects and invests a fair amount of the producer’s budget in such technologies and the skills to wield them across the entire visual effects pipeline, from pre-production to final rendering. At present, some movies require proportionally much more in the way of visual effects as a percentage of the overall film.
The author describes the Artist-Driven Software Development Framework as one in which visual effects studios require particular visual effects and assign developers to that work. From those direct experiences, Kruse suggests that the innovations may be integrated into software tools and applied to commercial software programs to the benefit of all stakeholders. Perhaps there would be a net positive if “a visual effects studio publish(es) proprietary tools as soon as possible and in close cooperation with an existing software company” to further extend the designed solution and to increase market acceptance of the tool and approach (Kruse, 2021, p. 58). The visual effects studio may benefit from being known for fx innovations (p. 59), even as it gives away intellectual property. The general sequence of this artist-driven framework is the following: research (in-house), prototype (in-house), and product (external) (p. 62).
One real-world example involves Deep Compositing and other technological innovations in special effects. There are heightened efficiencies, so that the look-and-feel of a shot may be changed in a fraction of the time required for a “full re-render” (p. 57).
In terms of in-house development of such innovations, apparently only a few players in the space have that capability. The examples discussed in the chapter involve companies with about 1,000 employees that can engage in such work (p. 61). The talent sets are expensive and perhaps rare.
Edilaine Rodrigues Soares and Fernando Hadad Zaidan’s “Composition of the Financial Logistic Costs of the IT Organizations Linked to the Financial Market: Financial Indicators of the Software Development Project” (Ch. 4) identifies a range of financial indicators from software development projects. These include both fixed and variable costs, such as salaries and other inputs. Their variables also include anticipated return on investment (ROI), such as anticipated sales and other elements (p. 74). These are integrated into an equation comprised of the following: gross revenue, sum of expenses (fixed and variable), percent of profitability, and percentage of taxes per emitted invoice (p. 78). For IT organizations to be attractive to investors, competitive in the marketplace, and an integral part of the supply chain, calculating the “financial logistic costs of the information management of the software development project in the IT organizations” may be an important approach (p. 85). The method described in this work projects costs a month out.
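For readers who want to see how such components might combine, here is a minimal sketch in Python; the function name, the parameter names, and the way the pieces are combined are illustrative assumptions for this review, not the authors’ actual equation.

```python
def financial_logistic_cost(gross_revenue: float,
                            fixed_expenses: float,
                            variable_expenses: float,
                            profitability_pct: float,
                            invoice_tax_pct: float) -> float:
    """Illustrative combination of the components named in Ch. 4:
    gross revenue, the sum of expenses (fixed and variable), a percent
    of profitability, and a percentage of taxes per emitted invoice.
    The exact equation is the authors'; this is only a sketch."""
    expenses = fixed_expenses + variable_expenses
    taxes = gross_revenue * (invoice_tax_pct / 100.0)
    target_profit = gross_revenue * (profitability_pct / 100.0)
    # Amount that must be covered for the project to hit its profit target.
    return expenses + taxes + target_profit

# Example: projecting one month ahead, as the chapter's method does.
print(financial_logistic_cost(100_000, 35_000, 20_000, 15.0, 8.0))
```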
Janis Osis and Erika Nazaruka (Asnina)’s “Theory Driven Modeling as the Core of Software Development” (Ch. 5) describes the current state of software development as “requirements-based with chaotic analysis” (p. 93). Further: “The four most expensive software and activities (in decreasing order) are: finding and fixing bugs, creating paper documents, coding, and meetings and discussions” (p. 90). So much of programmer time is spent fixing bad code. Software projects are rife with “budget and schedule overruns” (p. 89). Software engineering is “in permanent crisis” (p. 91). Overall, software development is “primitive” and resistant to “formal theories” (p. 90).
While various models of software have not necessarily proven all that useful, the coauthors propose a Model Driven Architecture (MDA), with “architectural separation of concerns,” for formalizing software creation as an engineering discipline. MDA uses “formal languages and mechanisms for description of models” [Osis & Nazaruka (Asnina), 2021, p. 92]. The innovation proposed involves bringing mathematical accuracy into the very initial steps of software development and carrying it through the stages beyond requirements gathering, including analysis, low-level design, coding, testing, and deployment. The model proposed here includes the Topological Functioning Model (TFM), which uses “lightweight” mathematics and includes “concepts of connectedness, closure, neighborhood and continuous mapping” (p. 98). Using TFM to define both the “solution domain” and the “problem domain” to inform the software development requirements (p. 100) may benefit the practice of software design and control some of the complexities.
Richard Ehrhardt’s “Cloud Build Methodology” (Ch. 6) reads as an early work from when people were first entering the cloud space and learning the fundamentals: differentiating between public, private, and hybrid clouds; understanding the various services provided via cloud (IaaS, PaaS, SaaS, DaaS, and even XaaS, referring to infrastructure, platform, software, desktop, and “anything”…as a service); and conceptualizing cost drivers in building out a cloud. This researcher describes “Anything as a Service” (XaaS) as “the extension of cloud services outside of purely infrastructure or software based services” (p. 112) and may include service requests for “data centre floor space or even a network cable” (p. 112). Ehrhardt describes a componentized “data centre, infrastructure layer, virtualization layer, orchestration and automation, authentication, interface, operational support services, and business support services” (p. 112). Some of the provided information reads as dated, such as the metering of services (vs. the costing out of services described here), the cloud provider professionals who help set up cloud services (vs. the sense of customers having to go it alone), and so on.
Abhishek Pandey and Soumya Banerjee’s “Test Suite Minimization in Regression Testing Using Hybrid Approach of ACO and GA” (Ch. 7) begins with the challenge of identifying “a minimum set of test cases which covers all the statements in a minimum time” and prioritizing them for testing for optimal chances of detecting faults in the code base (p. 133). Software testing can be time-consuming, labor- and resource-intensive, and demanding of sophisticated analytical skills and meticulous attention to detail. Software developer attention is costly, in high demand, and in short supply. To aid in the software testing effort, various algorithms are applied to identify potential challenges. Regression testing is common in the “maintenance phase of the software development life cycle” (p. 134). In this work, the researchers use “a hybrid approach of ant colony optimization algorithm and genetic algorithm” (p. 133) in a metaheuristic search. Various methods are assessed and analyzed for performance and “fitness” through statistical means.
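To make the optimization target concrete, the following sketch shows a coverage-versus-time fitness function over a toy test suite; the data, the weighting, and the random-search loop are placeholders standing in for the chapter’s actual ACO/GA hybrid.

```python
import random

# Hypothetical data: which statements each test covers, and its runtime.
coverage = {
    "t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5, 6},
    "t4": {1, 4, 5}, "t5": {2, 6, 7},
}
runtime = {"t1": 1.2, "t2": 0.4, "t3": 0.9, "t4": 1.1, "t5": 0.7}
all_statements = set().union(*coverage.values())

def fitness(subset):
    """Reward statement coverage, penalize total execution time."""
    covered = set().union(*(coverage[t] for t in subset)) if subset else set()
    cov_ratio = len(covered) / len(all_statements)
    time_cost = sum(runtime[t] for t in subset)
    return cov_ratio - 0.05 * time_cost

# Random-search stand-in for the metaheuristic loop (ACO/GA would
# generate candidate subsets far more intelligently than this).
best, best_fit = None, float("-inf")
for _ in range(2000):
    candidate = {t for t in coverage if random.random() < 0.5}
    f = fitness(candidate)
    if f > best_fit:
        best, best_fit = candidate, f
print(sorted(best), round(best_fit, 3))
```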
Chhabi Rani Panigrahi, Rajib Mall, and Bibudhendu Pati’s “Software Development Methodology for Cloud Computing and Its Impact” (Ch. 8) points at a number of benefits of developing software in the cloud, given the customizable environment there, the ease of group collaboration, the speed to deployment, the ability to harness other enterprise solutions, and the ability to scale the effort (p. 156). This work involves evaluation of some of the cloud computing programming models (such as “MapReduce, BSPCloud, All-pairs, SAGA, Dryad, and Transformer”) and their respective pros and cons for programming in the cloud (p. 162). Cloud computing “allows parallel processing; provides fault tolerant functionality; supports heterogeneity; (and) takes care of load balancing” (p. 163). It enables organizations to capture user feedback and implement changes more quickly. The public cloud has limits and is not advised for “systems that require extreme availability, need real-time computational capabilities or handle sensitive information” (p. 169). This work describes the application of agile development, often with lean teams of 5 to 9 people. As the software evolves through the various stages—requirements gathering, analysis, design, construction, testing, and maintenance—changes to the software at each phase become both more costly and complex (p. 153).
Jeni Paay, Leon Sterling, Sonja Pedell, Frank Vetere, and Steve Howard’s “Interdisciplinary Design Teams Translating Ethnographic Field Data Into Design Models: Communicating Ambiguous Concepts Using Quality Goals” (Ch. 9) describes the challenges of using complex ethnographic data to inform design models. They use “cultural probes” (as a data collection technique) to learn about “intimate and personal aspects of people’s lives” (Gaver et al., 1999, as cited in Paay, Sterling, Pedell, Vetere, & Howard, 2021, p. 174) as related to cultural aspects of personal and social identities. On collaborative projects, there is the importance of having “a shared understanding between ethnographers, interaction designers, and software engineers” (p. 173). This team suggests the importance of having defined quality goals in system modeling (p. 173). They suggest there is power in maintaining “multiple, competing and divergent interpretations of a system” and integrating these multiple interpretations into a solution (Sengers & Gaver, 2006, as cited in Paay, Sterling, Pedell, Vetere, & Howard, 2021, p. 183). They describe the application of social and emotional aspects to the design of socio-technical systems. Their Secret Touch system enables connectivity between various agents in multi-agent systems. This particular system includes four agents: “Device Handler, Intimacy Handler, Partner Handler, and Resource Handler” (p. 191), drawing on how couples and other groups interact to inform the design of technical systems (p. 195).
Nancy A. Bonner, Nisha Kulangara, Sridhar Nerur, and James T. C. Teng’s “An Empirical Investigation of the Perceived Benefits of Agile Methodologies Using an Innovation-Theoretical Model” (Ch. 10) explores Agile Software Development (ASD), in particular, to see if such approaches promote constructive and innovative work. Agile development is about “evolutionary development and process flexibility” (p. 208), two software development practices that the team suggests would be effective in mitigating some of the complexities of software development projects. Based on the research and empirical data, evolutionary development, a “cornerstone of agile development,” is found to benefit software developer work, but “process flexibility” is found not to have an impact on “complexity, compatibility, and relative advantage” (p. 202). [Agile software development is generally known as a method which brings together lean cross-functional teams and “advocates adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages flexible responses to change,” according to Wikipedia.]
This work studies two dimensions of development process agility: “evolutionary development and process flexibility,” which are thought to have effects on developer adoption (Bonner, Kulangara, Nerur, & Teng, 2021, p. 207). At the heart of the research is a survey with responses from a heterogeneous random sample of international professionals in IT. Some statistically significant findings include that evolutionary development is “negatively related to perceived complexity of the development methodology,” but the same was not found for “process flexibility” (p. 220). A development methodology perceived as less complex is seen as more advantageous (p. 220). Also: “evolutionary development” is related to “perceived compatibility of the development methodology” (p. 220) but “process flexibility” is not. The researchers found support for the idea that “Evolutionary Development of the development methodology will be positively related to perceived relative advantage of using the development methodology” but not for “process flexibility” (p. 220).
Shailesh D. Kamble, Nileshsingh V. Thakur, and Preeti R. Bajaj’s “Fractal Coding Based Video Compression Using Weighted Finite Automata” (Ch. 11) describes how video is often compressed based on temporal redundancies (changes between frames over time) and spatial redundancies (among proxemic or neighboring pixels). Various methods have been proposed for video compression based on performance evaluation parameters including “encoding time, decoding time, compression ratio, compression percentage, bits per pixel and Peak Signal to Noise Ratio (PSNR)” (p. 232). They propose a method of fractal coding “using the weighted finite automata” (WFA) because “it behaves like the Fractal Coding (FC). WFA represents an image based on the idea of fractal that the image has self-similarity in itself” (p. 232); both approaches involve the partitioning of images into parts and observing for differences against a core visual. They tested their approach on standard uncompressed video databases (including canonical ones like “Traffic, Paris, Bus, Akiyo, Mobile, Suzie”) and also on the videos “Geometry” and “Circle” (p. 232) to enable the observation of performance on different digital video contents. By itself, fractal compression is a lossy compression technique, and the addition of weighted finite automata (WFA) may lessen the lossiness. The experimental setup, using MATLAB, involves assessing the speed of the processing, the relative file sizes, and the quality of the reconstructed videos.
They also found better visual quality “where different colors exist,” though they did observe some problems with artifacts in the reconstructed videos.
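PSNR, one of the evaluation parameters the coauthors name, is straightforward to compute; this is a generic sketch with toy frames, not the chapter’s MATLAB setup.

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, max_val=255.0):
    """Peak Signal-to-Noise Ratio between an original and a reconstructed frame."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy 8-bit grayscale frames: an original and a slightly degraded copy.
frame = np.random.randint(0, 256, (64, 64))
noisy = np.clip(frame + np.random.normal(0, 5, frame.shape), 0, 255)
print(round(psnr(frame, noisy), 2), "dB")
```

Higher PSNR values indicate a reconstruction closer to the original, which is why the metric recurs across the compression chapters in this collection.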
Meifeng Liu, Guoyun Zhong, Yueshun He, Kai Zhong, Hongmao Chen, and Mingliang Gao’s “Fast HEVC Inter-Prediction Algorithm Based on Matching Block Features” (Ch. 12) proposes “a fast inter-prediction algorithm based on matching block features” with advantages in speed, coding time, and improvement in the peak signal-to-noise ratio (p. 253). “HEVC” stands for “High Efficiency Video Coding,” an international video compression standard. As with many of the chapters in this collection, this work will appeal most to readers with interests in the particular technologies and in the innovation methodologies.
Open source software (OSS) has something of a reputation for being complex and often full of faults, a reputation that may offset the benefits of often being free and having transparent code. Shozab Khurshid, A. K. Shrivastava, and Javaid Iqbal’s “Fault Prediction Modelling in Open Source Software Under Imperfect Debugging and Change-Point” (Ch. 13) suggests that OSS-based systems lack the staffing to formalize the correcting of mistakes in the code, and many who contribute to the code may lack an understanding of the OSS systems. The correction of prior mistakes may introduce additional ones. In general, fault removal rates are low in open source software. Setting up a framework to predict the number of faults in open source software (and to rank the software by fault metrics) would be useful for those considering possible adoption. This chapter involves the analysis of eight models for predicting faults in open source software, assessed for their prediction capability on open-source software datasets. Respective OSS are ranked based on “normalized criteria distance” (p. 277). Users of OSS report bugs, and if these are reproducible, the source code is updated and reshared publicly. The developing team comes from the community, and in most cases, these are volunteers. An administrator or team may provide oversight for changes and control access to the core code base. The researchers here test the reliability of OSS by using eight different software reliability growth models (SRGMs). Some important analyzed factors include “change point and imperfect debugging phenomenon” (p. 291). This group found that the Weibull-distribution-based SRGM gives “the best fault prediction” (p. 291); however, the research involved study of the “time based single release framework” (p. 291), and future studies would benefit from testing for multiple release modeling and for multiple dimensions.
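As a rough illustration of what a Weibull-type SRGM does, the following sketch uses a commonly cited mean value function of the form m(t) = a * (1 - exp(-b * t^c)); the parameter values are invented for illustration and are not drawn from the chapter.

```python
import math

def weibull_srgm_mean(t, a, b, c):
    """Expected cumulative faults detected by time t under a Weibull-type
    SRGM mean value function m(t) = a * (1 - exp(-b * t**c))."""
    return a * (1.0 - math.exp(-b * t ** c))

# Illustrative parameters: a = total expected faults, b = scale, c = shape.
a, b, c = 120.0, 0.05, 1.3
for week in (1, 4, 12, 26, 52):
    detected = weibull_srgm_mean(week, a, b, c)
    print(f"week {week:2d}: ~{detected:5.1f} faults expected, "
          f"{a - detected:5.1f} remaining")
```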
Manuel Kolp, Yves Wautelet, and Samedi Heng’s “Design Patterns for Social Intelligent Agent Architectures Implementation” (Ch. 14) uses a social framework where “autonomous agents” are analogically like “actors in human organizations” (p. 294) and interact in multi-agent systems (MAS). Social patterns may be used for building “open, distributed, and evolving software required by today’s business IT applications such as e-business systems, web services, or enterprise knowledge bases” (p. 294). The coauthors observe that “fundamental concepts of MAS are social and intentional rather than object, functional, or implementation-oriented,” and so suggest that the design of MAS architectures “can be eased by using social patterns” (p. 294). An agent is defined as “a software component situated in some environment that is capable of flexible autonomous action in order to meet its design objective” (Aridor & Lange, 1998, as cited in Kolp, Wautelet, & Heng, 2021, p. 295). Given the abstractions required to manage the elusive constructs of code and code functioning, this method offers a mental way to express the ideas.
There are two basic social patterns: the Pair pattern, with defined interactions between “negotiating agents,” and the Mediation pattern, in which “intermediate agents…help other agents to reach agreement about an exchange of services” (Kolp, Wautelet, & Heng, 2021, p. 300). The social patterns framework is applied to a variety of agent interaction patterns, to enable developers to conceptualize, collaborate, and communicate about the abstract technological functions. The researchers here describe patterns that work vs. anti-patterns that have been shown not to.
Liliana Favre’s “A Framework for Modernizing Non-Mobile Software: A Model-Driven Engineering Approach” (Ch. 15) proposes a method to harness legacy code for the modern mobile age. The framework proposed “allows integrating legacy code with the native behaviors of the different mobile platform through cross-platform languages” (p. 320). This approach enables people to migrate C, C++, and Java to mobile platforms (p. 320), through the Haxe multiplatform language (and compiler) that allows the use of the “same code to deploy an application on multiple platforms” simultaneously (p. 324). From one code-base, various applications and source code for different platforms may be created (p. 324). The author describes the harnessing of model-driven engineering (MDE) as a way to abstract code functionalities, to enable reengineering systems. Their approach is a semi-automatic one to reverse engineer the models in legacy software (p. 321). That information may be used to build out the functionality for mobile or other efforts. This approach may be particularly relevant in the time of the Internet of Things (IoT).
Various other tools help bridge between versions of software. Technological standards serve as metamodels, families of models, so each part of the software can meet particular requirements. A metamodel is “a model that defines the language for expressing a model, i.e. ‘a model of models’. A metamodel is an explicit model of the constructs and rules needed to build specific models. It is a description of all the concepts that can be used in a model” (Favre, 2021, p. 327). This reads as meticulous and complex work, to bridge between various code and technology systems for functionalities. At play are both reverse engineering and forward engineering, and a deep understanding of how to achieve various representations of code and functions…to enable transitioning to other codes and formats. This work shows the importance of actualizing migrations in more systematic ways instead of ad hoc ones. Favre (2021) writes: “A migration process must be independent of the source and target technologies. In our approach, the intermediate models act as decoupling elements between source and target technologies. The independence is achieved with injectors and, M2M and M2T transformations. Besides in a transformation sequence, models could be an extension point to incorporate new stages” (p. 340). [Note: The M2M refers to “model to model” transformation, and the M2T refers to “model to text” transformation.]
Arun Kumar Sangaiah and Vipul Jain’s “Fusion of Fuzzy Multi-Criteria Decision Making Approaches for Discriminating Risk with Relate (sic) to Software Project Performance: A Prospective Cohort Study” (Ch. 16) suggests the importance of assessing software projects for risk as a consideration for whether and how a work should proceed. If a project is high risk, there is often low performance. This team used “fuzzy multi-criteria decision making approaches for building an assessment framework that can be used to evaluate risk in the context of software project performance in (the) following areas: 1) user, 2) requirements, 3) project complexity, 4) planning and control, 5) team, and 6) organizational environment” (p. 346). Theirs is a systematized way to assess relevant factors to ultimately inform decision making, including two approaches: Fuzzy Multi-Criteria Decision Making (FMCDM) and the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). This work involves the measuring of risks in five dimensions: “requirements, estimations, planning and control, team organization, and project management” (p. 347) and in 22 evaluation criteria, including “ambiguous requirements,” “frequent requirement changes,” “lack of assignment of responsibility,” “lack of skills and experience,” “low morale,” and “lack of data needed to keep objective track of a project” (p. 354). This team applied their model to assess software project risk among 40 projects and identified “risky/confused projects”; they report accurately identifying 36 of the 40 projects, a stated accuracy of 92.5% (pp. 355-356).
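For readers unfamiliar with TOPSIS, a crisp (non-fuzzy) version conveys the core ranking idea; the scores and weights below are invented, and the chapter’s fuzzy variant layers membership functions on top of this basic logic.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: rank alternatives by closeness to the ideal solution.
    matrix: alternatives x criteria scores; benefit: True if higher is better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))          # vector normalization
    v = norm * np.asarray(weights, dtype=float)       # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                    # closeness coefficient

# Toy example: 3 projects scored on 3 risk criteria (higher score = more risk).
scores = [[7, 5, 6], [3, 4, 2], [5, 8, 7]]
closeness = topsis(scores, weights=[0.5, 0.3, 0.2], benefit=[False, False, False])
print(closeness.round(3))  # higher = closer to the low-risk ideal
```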
An organization’s decision to pursue a particular Enterprise Resources Planning (ERP) system is not a small endeavor. So much of an organization’s performance may ride on such technologies. There are complex tools in the marketplace. The price tag for the software is hefty, and the skills required to run and use one are challenging to develop and may require many full-time staff positions. Maria Manuela Cruz-Cunha, Joaquim P. Silva, Joaquim José Gonçalves, José António Fernandes, and Paulo Silva Ávila’s “ERP Selection using an AHP-based Decision Support System” (Ch. 17) describes a systematic approach to this decision by considering qualitative and quantitative factors in an Analytic Hierarchy Process (AHP) model. In this model, there are three main moments of application of the technique: “definition of the problem and of the main objective; definition of the tree of criteria (hierarchical structure), with the relative weights for each criterion; evaluation of the alternative solutions, using the defined tree” (p. 377). This work is based on a solid literature review and multiple early questionnaires including participants from various IT roles. Some of the preliminary findings were intuitive, such as the importance of “user friendliness” (p. 384). Experts wanted “guarantees,” “consulting services,” and “customization” (p. 384). Larger organizations “rank ‘payment and financial terms’ and ‘customization’ criteria with higher importance than the smaller ones” (p. 384). The Analytic Hierarchy Process involved evaluating and weighting 28 criteria. Some of the most critical ones included the following: “coverage of the required functionalities / norms / regulations,” “technical support quality,” and “technical team capability,” among other considerations (p. 386). Is the ERP easy to upgrade? What does the security look like? Is there access to the source code for tweaks?
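A small sketch shows how AHP turns pairwise judgments into criterion weights; the comparison values below are illustrative placeholders, not the questionnaire results reported in the chapter.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three ERP criteria
# (functional coverage, technical support quality, technical team capability),
# using Saaty's 1-9 scale; the judgments here are invented.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Approximate priority vector via the geometric-mean (row means) method.
geo_means = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = geo_means / geo_means.sum()
print(weights.round(3))  # relative importance of each criterion

# Rough consistency check via the principal eigenvalue.
lambda_max = max(np.linalg.eigvals(A).real)
ci = (lambda_max - A.shape[0]) / (A.shape[0] - 1)
print(round(ci, 3))  # consistency index; compare against Saaty's random index
```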
Miroslav Škorić’s “Adaptation of Winlink 2000 Emergency Amateur Radio Email Network to a VHF Packet Radio Infrastructure” (Ch. 18) suggests that in a time of emergency, in a scenario of malfunctioning commercial communications services, the world can connect via email by “interconnecting an existing VHF amateur packet radio infrastructure with ‘Winlink 2000’ radio email network” (p. 392). This way, people may use radio waves to share information across broader geographies. This is the starting premise, and the author describes the setup with dedicated hardware and software configurations. Certainly, even in the absence of a disaster, piggybacking various technologies for different enablements has an inherent charm, especially when described with clear directional details, applied expertise, and screenshots.
Alena Buchalcevova’s “Methodology for ISO/IEC 29110 Profile Implementation in EPF Composer” (Ch. 19) shares a case experience out of the Czech Republic. Here, the Eclipse Process Framework (EPF) Composer is used to create an Entry Profile implementation, based on the ISO/IEC 29110 Profile Implementation Methodology. EPF Composer is a free and open-source tool for “enterprise architects, programme managers, process engineers, project leads and project managers to implement, deploy and maintain processes of organisations or individual projects” (Tuft, 2010, as cited in Buchalcevova, 2021, p. 425). It provides an organizing structure for technology work, with ways to define tasks and subtasks, respective roles, and other elements. There is an ability to apply an overarching theoretical approach, too, for software process engineering. The system has a predefined schema built in, related to agile.
Leon Sterling, Alex Lopez-Lorca, and Maheswaree Kissoon-Curumsing’s “Adding Emotions to Models in a Viewpoint Modelling Framework From Agent-Oriented Software Engineering: A Case Study With Emergency Alarms” (Ch. 20) demonstrates how emotions may be brought into software design in an applied case of building an emergency alarm system for older people. Modeling out the emotional goals of stakeholders empathically, based on a viewpoint framework, may inform early-phase requirements for software design and ultimately result in a much stronger product better aligned with human needs. An early question here is: what is the profile of a potential user of a personal alarm system, such as a pendant, for access to help during a health emergency? What are the person’s responsibilities, constraints, and emotional goals? The coauthors write: “The older person wants to feel cared about…safe…independent…in touch with their relatives and carers…(and) unburdened of the obligation of routinely get(ting) in touch with their relative/carer” (p. 448). This list then informs how software developers may build out an application to meet the core functional purpose of emergency communications along with the user’s emotional needs. The requirements inform the software design but may have implications for the aesthetics, the interface, the marketing, the sales strategies, and other aspects. For example, one of the requirements is that the “system must be accessible to the older person and invisible to everyone else” (p. 451), because of the potential risk to the pride and dignity of the user, who has to feel empowered, in personal control, and independent.
This team writes of their experiences with the approach, and they also explain their design through an interaction sequence diagram.
Petr Ivanovich Sosnin’s “Conceptual Experiments in Automated Designing” (Ch. 21) begins with specifying designers’ behavior in solving project tasks during conceptual design. These steps are broken down into “behavior units as precedents and pseudo-code programming” as early work to systematize and automate design (p. 479). The Software Intensive Systems (SIS) designer approaches are captured using a survey tool. Also captured in the system are various system dependencies in the workflow. This information informs the “intelligent processing of the solved tasks” and can provide the following components: “a new model of the precedent; a new projection of the existing precedent model; a modified version of the existing model of precedent; a new concept that evolves an ontology of the Experience Base; a modified concept of the ontology” (p. 485). Such setups enable conceptual experimentation about the workability of design plans, based on the pseudo-code and the “understandable and checkable forms” (p. 501).
In Europe, various job recruitment agencies use customer relationship management (CRM) systems to connect job seekers with potential employers. Mobile CRM (mCRM), while used in a majority of the 35 recruitment agencies studied, is not yet put to full use, according to Tânia Isabel Gregório and Pedro Isaías’ “CRM 2.0 and Mobile CRM: A Framework Proposal and Study in European Recruitment Agencies” (Ch. 22). Effective uses of both CRM 2.0 and mobile CRM may enable heightened personalization of career recruitment efforts and more effective uses of the Social Web and social networking.
Vyron Damasiotis, Panos Fitsilis, and James F. O'Kane’s “Modeling Software Development Process Complexity” (Ch. 23) suggests the importance of software development processes (SDPs) that align with the complexity of modern software. From a literature review, these researchers identify 17 complexity factors, including code size, size of application database, programming language level / generation, use of software development tools, use of software development processes, concurrent hardware development, development for reusability, software portability and platform volatility, required software reliability, completeness of design, detailed architecture risk resolution, development flexibility, “product functional complexity and number of non-functional requirements” (p. 533), software security requirements, “level of technical expertise and level of domain knowledge” (p. 534), and other factors. The complexity elements are integrated into a model in four categories: “organizational technological immaturity, product development constraints, product quality requirements, and software size” (p. 540). By weighting the complexity factors, the four categories were found to rank in the following descending order: “software size, product quality requirements, product development constraints, and organization technological immaturity” (p. 541). The researchers applied their model to five case studies: one management information system, one geographical information system, several decision support systems, and one general information system, applied in different domains related to financed projects, transportation, water management, healthcare, and work. This work offers the foundational design for a tool to help people manage complex software development projects.
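In spirit, such a model rolls weighted factor ratings up into a single comparable score; the following sketch is a generic weighted sum with placeholder ratings and weights, not the chapter’s empirically derived weighting.

```python
# Hypothetical 1-5 ratings of a project's complexity, grouped into the
# chapter's four categories; the weights here are placeholders only.
ratings = {
    "software size": 4.0,
    "product quality requirements": 3.5,
    "product development constraints": 3.0,
    "organizational technological immaturity": 2.0,
}
weights = {
    "software size": 0.35,
    "product quality requirements": 0.30,
    "product development constraints": 0.20,
    "organizational technological immaturity": 0.15,
}
overall = sum(ratings[c] * weights[c] for c in ratings)
print(round(overall, 2))  # single complexity score for comparing projects
```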
Chitreshh Banerjee, Arpita Banerjee, and Santosh K. Pandey’s “MCOQR (Misuse Case-Oriented Quality Requirements) Metrics Framework” (Ch. 24) strives to create a system to anticipate various forms of malicious cyberattacks and to set up credible defenses, given the complexity of software. Of particular focus are various cases of “misuse” of computer systems, to inform on computer system vulnerabilities. The coauthors explain the scope of the issue: “As per available statistics, it has been estimated that around 90% of security incidents which are reported are due to the various defects and exploits left uncovered, undetected, and unnoticed during the various phases of the software development process” (p. 555).
A vulnerability management life cycle occurs in the following steps: “discover, prioritize assets, assess, report, remediate, (and) verify” (Banerjee, Banerjee, & Pandey, 2021, p. 561).
“Security loopholes” may result in interrupted business, lost data, compromised privacy, loss of intellectual property, and other challenges. Different organizations and IT systems have different threat profiles and potential attack surfaces. The proposed Misuse Case Oriented Quality Requirements (MCOQR) metrics framework provides help in defining security requirements and support toward designing and deploying software (Banerjee, Banerjee, & Pandey, 2021, p. 572). This is a system that can work in alignment with existing threat assessment modeling and assessments.
Bryon Balint’s “Maximizing the Value of Packaged Software Customization: A Nonlinear Model and Simulation” (Ch. 25) focuses on the question of how much or how little an organization may want to customize a third-party Enterprise Resource Planning (ERP) system or other software system. Even if a software package is well chosen for fit with an organization, there may be additional anticipated and unanticipated needs that require additional work. Perhaps the software is modularized, and only particular parts of the tool may be activated based on licensure requirements. This chapter explores the customization decision at organizations. This study involves “modelling nonlinear relationships between the amount of time spent on custom development and the resulting benefits,” “modelling nonlinear relationships between development costs and maintenance costs,” and “modelling corrective development as a function of development related to fit and user acceptance” (p. 580). This work suggests that custom development occurs in four categories: to address gaps in fit, to facilitate user acceptance, to facilitate integration, and to enhance performance (Balint, 2021, pp. 583-584). This information enables simulation techniques to project when a customization approach may provide necessary organizational value and when it may not, informing managerial decision making. Will a change result in diminishing returns? What are the levels of risk in implementing new code? Is the manager biased toward the upside or the downside?
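The intuition of diminishing returns can be sketched with a simple nonlinear benefit curve set against rising development and maintenance costs; the functional forms and numbers below are invented for illustration and are not Balint’s model.

```python
import math

def net_value(dev_hours, b_max=500_000, k=0.004,
              dev_rate=150, maint_factor=0.0008):
    """Net value of customization effort under diminishing returns:
    benefits saturate as hours grow, while maintenance cost grows faster
    than linearly. All parameters are illustrative assumptions."""
    benefit = b_max * (1 - math.exp(-k * dev_hours))   # diminishing returns
    dev_cost = dev_rate * dev_hours                    # roughly linear
    maint_cost = maint_factor * dev_hours ** 2 * 10    # superlinear growth
    return benefit - dev_cost - maint_cost

# Simple simulation: scan effort levels to find the highest projected net value.
best = max(range(0, 3001, 50), key=net_value)
print(best, round(net_value(best)))  # past this point, more customization loses value
```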
Rajeshwar Vayyavur’s “Software Engineering for Technological Ecosystems” (Ch. 26) offers a framework for understanding software ecosystems (SECOs), technological spaces for the design of software. The author identifies various architectural challenges for SECOs, including platform openness, technical integration (which applications to integrate), independent platform development, independent application development, qualities, and features for software development that complies with regulations and standards. He offers in-depth insights about these challenges, tradeoffs, and decision making in each dimension. Various SECOs have to accommodate the needs of different users of the tool, who engage with it through different use cases.
Jong-Gyu Hwang and Hyun-Jeong Jo’s “Automatic Static Software Testing Technology for Railway Signaling System” (Ch. 27) describes the harnessing of intelligent systems to ensure that the critical software code that runs railway signaling is validated. The automated testing tool described in this work can test both the software (written in C and C++) and the functioning of the live system. The co-researchers cite the standards that have to be met by the software, from the MISRA-C (Motor Industry Software Reliability Association) coding rules to IEC 61508, IEC 62279, and other international standards, as well as Korean ones. This work includes examples moving from an initial “violated form” of code to improved code (p. 624).
Bertrand Verlaine’s “An Analysis of the Agile Theory and Methods in the Light of the Principles of the Value Co-Creation” (Ch. 28) places at the forefront of work collaborations “co-created value” as something achieved between a service provider and a customer. Agile, a theory for managing software implementation projects, aligns with the idea of value co-creation given the focus on customers and their needs. Agile includes critical principles, including that “continuous attention to technical excellence and good design enhances agility” and “simplicity—the art of maximizing the amount of work not done—is essential” (Beck et al., 2001, in Agile Manifesto, as cited in Verlaine, 2021, p. 635). Agile is known for four values: “individuals and the interactions are privileged over processes and tools”; “working software is preferred to comprehensive documentation”; “customer collaboration takes a prominent place instead of contract negotiation”; “responding to change is favoured compared to following a plan” (pp. 634-635).
Agile has spawned various versions, including SCRUM, eXtreme Programming, Rapid Application Development, Dynamic Systems Development Method, Adaptive Software Development, Feature-Driven Development, and Crystal Clear. These agile methods are studied and summarized. Then, they are analyzed for their contribution to value co-creation with customers across a range of factors: “resources and competencies integration, consumer inclusion, interaction-centric, personalization, contextualization, (and) responsibility of all parties” (Verlaine, 2021, p. 646). Here, eXtreme Programming (XP) and Rapid Application Development (RAD) come out well in terms of supporting value co-creation.
Shaifali Madan Arora and Kavita Khanna’s “Block-Based Motion Estimation: Concepts and Challenges” (Ch. 29) focuses on the importance of digital video compression, achieved by removing redundancies. The tradeoffs are between “speed, quality and resource utilization” (p. 651). The popularization of video streaming to mobile devices as well as 3D television has introduced other markets for such technologies and affects the technological requirements. This work reviews the current state of the video compression field and how it has evolved over time.
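At the heart of block-based motion estimation is a search for the displacement that minimizes a block-difference measure; the following sketch uses an exhaustive search with the sum of absolute differences (SAD) on toy frames, a simplified stand-in for the techniques the chapter surveys.

```python
import numpy as np

def best_match(ref_frame, block, top_left, search_radius=4):
    """Exhaustive block matching: find the motion vector minimizing the
    sum of absolute differences (SAD) within a small search window."""
    h, w = block.shape
    y0, x0 = top_left
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref_frame[y:y+h, x:x+w].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_mv, best_sad = (dy, dx), sad
    return best_mv, best_sad

ref = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
cur_block = ref[10:18, 12:20]               # a block "moved" from the reference
print(best_match(ref, cur_block, (8, 10)))  # should recover the (2, 2) displacement
```

Real encoders replace this brute-force search with faster patterns, which is exactly the tradeoff space the chapter reviews.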
Kening Zhu’s “From Virtual to Physical Problem Solving in Coding: A Comparison on Various Multi-Modal Coding Tools for Children Using the Framework of Problem Solving” (Ch. 30) uses a research experimentation setup in which children experience one of four designed face-to-face workshops around coding, to better understand what conditions promote their abilities in problem-solving using computational thinking. This study focuses on the lower elementary ages from 7 to 10 years old, an age at which logical reasoning is more developmentally available and there are increased efficiencies for attaining second languages. (Coding may be seen as another language of sorts.) This research shared that “graphical input could keep children focused on problem solving better than tangible input, but it was less provocative for class discussion. Tangible output supported better schema construction and causal reasoning and promoted more active class engagement than graphical output but offered less affordance for analogical comparison among problems” (p. 677). The tangibles in this case involved a maze and a robot, which were appealing to young learners and elicited higher levels of engagement (p. 686).
Latina Davis, Maurice Dawson, and Marwan Omar’s “Systems Engineering Concepts with Aid of Virtual Worlds and Open Source Software: Using Technology to Develop Learning Objects and Simulation Environments” (Ch. 31) brings to mind how various hot technologies, like immersive virtual worlds, move in and out of favor, yet when they offer particular teaching and learning affordances that fit well, they can be irreplaceably useful. Students in an engineering course use high-level systems analysis to inform their design of an ATM to meet customer needs. They go through the phases of a Systems Development Life Cycle (planning, analysis, design, implementation, and maintenance) (p. 704). They work through various case scenarios of a person going to an ATM for service and consider ways to build out a system to meet their needs. They examine required functions, objects, actions, and other necessary components (p. 709). They draw out a sequence diagram, an activity diagram (p. 711), and a systems sequence diagram (p. 712), using formal diagrammatic expressions. They also integrate a virtual ATM scenario in Second Life, with which they conduct some research about virtual human-embodied avatars and their efficiencies in using the virtual ATM.
Abhishek Pandey and Soumya Banerjee’s “Test Suite Optimization Using Chaotic Firefly Algorithm in Software Testing” (Ch. 32) focuses on the importance of effective auto-created test cases to test software. The cases have to meet a range of criteria, such as “statement coverage, branch coverage” and other factors, to be effective for testing (p. 722). Various algorithms may be used to create test cases, and these are pitted against one another to see which creates the most useful test data. In this work: “Major research findings are that chaotic firefly algorithm outperforms other bio-inspired algorithm such as artificial bee colony, Ant colony optimization and Genetic Algorithm in terms of Branch coverage in software testing” (p. 722). This team found that the created test cases were fit and resulted in optimized test cases (p. 729). These experiments were performed in MATLAB.
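The “chaotic” part of such algorithms typically refers to replacing uniform randomness with a deterministic chaotic sequence; a logistic map is a common choice, sketched below as a general illustration rather than the authors’ specific implementation.

```python
# A logistic map is a common way to inject chaotic (rather than uniformly
# random) step sizes into a firefly-style search; this is a generic
# illustration, not the chapter's algorithm.
def logistic_map(x, r=4.0):
    return r * x * (1.0 - x)

x = 0.7  # seed in (0, 1), avoiding the map's fixed points
chaotic_steps = []
for _ in range(10):
    x = logistic_map(x)
    chaotic_steps.append(round(x, 3))
print(chaotic_steps)  # deterministic but non-repeating values in (0, 1)
```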
Marco Antônio Amaral Féris’ “QPLAN: A Tool for Enhancing Software Development Project Performance with Customer Involvement” (Ch. 33) strives to integrate best practices for software development into a technology used for project planning. The intuition behind this tool is that software projects would be more successful if the quality of the planning were high and integrated some of the research-based best practices. The QPLAN system offers a 12-step manual approach to software development project planning in a technology system that offers tips along the way. The system enables testing of its functions and structure, using both white box and black box testing. QPLAN has a straightforward interface and a fairly transparent process, based on this chapter. Project success results in work efficiency for the team and “effectiveness, customer satisfaction, and business results” for the customers (p. 755).
Hugo R. Marins and Vania V. Estrela’s “On the Use of Motion Vectors for 2D and 3D Error Concealment in H.264/AVC Video” (Ch. 34) explores the complexities of concealing errors in video using “intra- and inter-frame motion estimates, along with other features such as the integer transform, quantization options, entropy coding possibilities, deblocking filter” and other computational means (p. 765). This capability is “computationally-demanding” in video codecs in general and in the H.264/AVC video compression standard and coder/decoder in particular. The coauthors note that there is a lack of standardized performance assessment metrics for error concealment methods and suggest that future research may address this shortcoming (p. 782). This and other works in this anthology showcase the deep complexities behind common technologies that the general public may use without thinking.
Kawal Jeet and Renu Dhir’s “Software Module Clustering Using Bio-Inspired Algorithms” (Ch. 35) proposes a method to automatically cluster software into modules. This tool is conceptualized as being useful to “recover the modularization of the system when source code is the only means available to get information about the system; identify the best possible package to which classes of a java project should be allocated to its actual delivery; combine the classes that could be downloaded together” (p. 789). To achieve the optimal clustering, these researchers turn to bio-inspired algorithms (the “bat, artificial bee colony, black hole and firefly algorithm”) and propose a hybrid of these algorithms “with crossover and mutation operators of the genetic algorithm” (p. 788). They tested their system on seven benchmark open-source software systems. They found that their hybrid was shown to “optimize better than the existing genetic and hill-climbing approaches” (p. 788). Efficient modularization is especially relevant during the maintenance phase of a software development life cycle.
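A clustering fitness of this kind generally rewards dependencies kept inside a module and penalizes those crossing module boundaries; the sketch below is a simplified stand-in for such a measure, not the chapter’s exact objective function.

```python
# Toy dependency graph: class -> classes it depends on.
deps = {
    "A": {"B"}, "B": {"A", "C"}, "C": {"B"},
    "D": {"E"}, "E": {"D"},
}

def clustering_score(partition):
    """Reward intra-cluster edges (cohesion), penalize inter-cluster edges
    (coupling); a simplified stand-in for a modularization-quality fitness."""
    cluster_of = {cls: i for i, group in enumerate(partition) for cls in group}
    intra = inter = 0
    for src, targets in deps.items():
        for dst in targets:
            if cluster_of[src] == cluster_of[dst]:
                intra += 1
            else:
                inter += 1
    return (intra - inter) / (intra + inter)

good = [{"A", "B", "C"}, {"D", "E"}]
bad = [{"A", "D"}, {"B", "C", "E"}]
print(clustering_score(good), clustering_score(bad))  # the cohesive split scores higher
```

A metaheuristic (bat, bee colony, firefly, or genetic) would then search the space of partitions for the highest-scoring one.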
Liguo Yu’s “Using Kolmogorov Complexity to Study the Coevolution of Header Files and Source Files of C-alike Programs” (Ch. 36) begins with what sounds like an idle question but which turns out to be clever and relevant: whether header and source files co-evolve during the evolution of an open source software project (the Apache HTTP web server). Specifically, do C-alike programs [C, C++, Objective-C] show correlation between the header and source files (which is how source code is divided), or do large gaps form? Header files contain “the program structure and interface” and are hypothesized to be “more robust than source files to requirement changes and environment changes” (p. 815).
This research resulted in the observation of “significant correlation between header distance and source distance.” More specifically, changes to header and source files correlated where “larger changes to header files indicates larger changes to source files on the same version and at the same time; smaller changes to header files indicates smaller changes to source files on the same version and at the same time, and vice versa” (Yu, 2021, p. 822). Another innovation involved using the Kolmogorov complexity and normalized compression to study software evolution (p. 822).
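Since Kolmogorov complexity itself is uncomputable, such studies typically approximate it with compression; the normalized compression distance sketched below (with invented code snippets) illustrates the general approach rather than Yu’s exact measurements.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a practical stand-in for the
    (uncomputable) Kolmogorov complexity: near 0 = very similar, near 1 = unrelated."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Invented snippets standing in for two versions of a header and an unrelated file.
header_v1 = b"int parse(char *buf);\nint render(int mode);\n"
header_v2 = b"int parse(char *buf, int len);\nint render(int mode);\n"
unrelated = b"SELECT name FROM users WHERE id = ?;\n"
print(round(ncd(header_v1, header_v2), 3), round(ncd(header_v1, unrelated), 3))
```

The distance between consecutive versions of a file is then tracked over releases, which is how the header-versus-source correlation above can be measured.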
Ekbal Rashid’s “R4 Model for Case-Based Reasoning and Its Application for Software Fault Prediction” (Ch. 37) describes a model which involves learning from observed examples of faults in software and using that information to engage in software quality prediction (by anticipating other similar faults in other programs), based on various similarity functions and distance measures: Euclidean Distance, Manhattan Distance, Canberra Distance, Clark Distance, and Exponential Distance (pp. 832 – 833). This system predicts the quality of software based on various software parameters: “number of variables, lines of code, number of functions or procedures, difficulty level of software, experience of programmer in years, (and) development time” (p. 838), with a low error rate.
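The named distance measures are straightforward to compute over normalized software parameters; the sketch below uses invented values for a stored case and a new module.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def canberra(a, b):
    # Skip pairs that are both zero to avoid division by zero.
    return sum(abs(x - y) / (abs(x) + abs(y)) for x, y in zip(a, b) if x or y)

# A stored case and a new module, described by hypothetical normalized
# parameters: variables, lines of code, functions, difficulty, experience, time.
stored_case = [0.4, 0.7, 0.3, 0.5, 0.6, 0.5]
new_module = [0.5, 0.6, 0.3, 0.7, 0.4, 0.6]
for name, fn in [("Euclidean", euclidean), ("Manhattan", manhattan), ("Canberra", canberra)]:
    print(name, round(fn(stored_case, new_module), 3))
```

The nearest stored cases, under whichever distance is chosen, supply the fault history used to predict the new module’s quality.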
Madhumita Panda and Sujata Dash’s “Automatic Test Data Generation Using Bio-Inspired Algorithms: A Travelogue” (Ch. 38) opens with a summary of metaheuristic algorithms to create test data over the past several decades. The coauthors generate path coverage-based testing using “Cuckoo Search and Gravitational Search algorithms” and compare the results of the prior approach to those created using “Genetic Algorithms, Particle Swarm optimization, Differential Evolution and Artificial Bee Colony algorithm” (p. 848). The topline finding is that “the Cuckoo search algorithm outperforms all other algorithms in generating test data due to its excellent exploration and exploitation capability within less time showing better coverage and in comparatively fewer number(s) of generations” (p. 864).
Takehiro Tsuzaki, Teruaki Yamamoto, Haruaki Tamada, and Akito Monden’s “Scaling Up Software Birthmarks Using Fuzzy Hashing” (Ch. 39) proposes a method for creating “software birthmarks” based on native aspects of the software to enable comparing versions of the software to identify potential software theft (p. 867). This team builds on the original idea by Tamada et al. (2004) and adds feature improvements to current birthmark systems. Their approach involves “transforming birthmarks into short data sequences, and then using the data obtained to compute similarity from a simple algorithm” (p. 870). These hash functions enable heightened efficiencies for the comparisons.
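The general idea of comparing long birthmarks via short hashed sequences can be sketched as follows; the fixed-size chunking and similarity ratio here are simplifications for illustration, not the authors’ algorithm (real fuzzy hashing typically uses content-defined chunk boundaries to survive insertions better).

```python
import difflib
import hashlib

def short_signature(data: bytes, chunk_size: int = 16) -> str:
    """Compress a long birthmark/byte stream into a short character sequence
    by hashing fixed-size chunks; comparisons then run on the short string."""
    alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
    chars = []
    for i in range(0, len(data), chunk_size):
        digest = hashlib.md5(data[i:i + chunk_size]).digest()
        chars.append(alphabet[digest[0] % len(alphabet)])
    return "".join(chars)

def similarity(a: bytes, b: bytes) -> float:
    return difflib.SequenceMatcher(None, short_signature(a), short_signature(b)).ratio()

original = bytes(range(256)) * 8
modified = original[:1500] + b"\x00\x00" + original[1500:]  # small tampering
unrelated = bytes(reversed(original))
# The tampered copy should score noticeably higher than the unrelated program.
print(round(similarity(original, modified), 2), round(similarity(original, unrelated), 2))
```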
Abhishek Pandey and Soumya Banerjee’s “Bio-Inspired Computational Intelligence and Its Application to Software Testing” (Ch. 40) serves as an effective baseline article on how some bio-inspired algorithms are applied to software testing problems, including “test case generation, test case selection, test case prioritization, test case minimization” (p. 883). The coresearchers offer flowcharts and other setups describing their sequences. Indeed, it is one thing to have available algorithms for various processes, but it is critical to have people with the expertise to harness the technologies for practical purposes.
Anthony Olufemi Tesimi Adeyemi-Ejeye, Geza Koczian, Mohammed Abdulrahman Alreshoodi, Michael C. Parker, and Stuart D. Walker’s “Ultra-High-Definition Video Transmission for Mission-Critical Communication Systems Applications: Challenges and Solutions” (Ch. 41) takes as its base assumption can’t-fail surveillance systems in emergency contexts. They present some ways that ultra-high-definition video may be transmitted, stored, and made available (p. 902). They conceptualize the video as compressed or uncompressed, distributed by wired or wireless transmission. They write: “With the evolution and in particular the latest advancements in spatial resolution of video alongside processing and network transmission, ultra-high-definition video transmission is gaining increasing popularity in the multimedia industry” (p. 911). Indeed, the technologies put into place for these capabilities will have implications for non-surveillance video as well, including video for storytelling, art, and other purposes.
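A bit of back-of-the-envelope arithmetic shows why this is hard; the resolution, frame rate, and chroma subsampling below are assumed for illustration and are not drawn from the chapter.

```python
# Rough bit-rate arithmetic for uncompressed UHD video under assumed settings.
width, height = 3840, 2160      # 4K UHD
fps = 60
bits_per_pixel = 12             # 8-bit 4:2:0 sampling (8 luma + 2 + 2 chroma)
bits_per_second = width * height * fps * bits_per_pixel
print(round(bits_per_second / 1e9, 2), "Gbit/s uncompressed")  # roughly 6 Gbit/s
```

Numbers at this scale explain why the chapter weighs compressed against uncompressed delivery and wired against wireless links.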
Solomon Berhe, Steven A. Demurjian, Jaime Pavlich-Mariscal, Rishi Kanth Saripalle, and Alberto De la Rosa Algarín’s “Leveraging UML for Access Control Engineering in a Collaboration on Duty and Adaptive Workflow Model that Extends NIST RBAC” (Ch. 42) focuses on security engineering. They build on a prior work: the formal Collaboration on Duty and Adaptive Workflow (CoD/AWF) model. They leverage the Unified Modeling Language (UML) “to achieve a solution that separates concerns while still providing the means to securely engineer dynamic collaborations for applications” (p. 916). They present their ideas through a role slice diagram (p. 921), a UML-extended role slice diagram for collaboration steps (p. 935), a proposed UML COD/AWF Obligation Slice Diagram (p. 936), a collaboration workflow (p. 937), and other explanatory diagrams.
Fadoua Rehioui and Abdellatif Hair’s “Towards a New Combination and Communication Approach of Software Components” (Ch. 43) reconceptualizes ways to reorganize software components based on a viewpoint approach, linking software components to particular types of system users and assigning a Manager Software component to enable communications between the various system components. The effort ultimately suggests a pattern “that ensures the combination and communication between software components” (p. 941).
Delroy Chevers, Annette M. Mills, Evan Duggan, and Stanford Moore’s “An Evaluation of Software Development Practices among Small Firms in Developing Countries: A Test of a Simplified Software Process Improvement Model” (Ch. 44) proposes a new approach to software process improvement (SPI) programs for use in developing countries and small firms with limited resources. Such entities have lesser capacity to deal with potential failures. Their approach involves 10 key software development practices, supported by project management technology. Some ideas that inform this tool are that institutionalized quality practices in an organization stand to improve software quality; likewise, people skills can be critical. A study of 112 developer/user dyads from four developing English-speaking Caribbean countries found a positive impact on software product quality (p. 955), based on this version of the tool.
Liguo Yu’s “From Teaching Software Engineering Locally and Globally to Devising an Internationalized Computer Science Curriculum” (Ch. 45) shares learning from the author’s teaching of software engineering at two universities, one in the U.S. and one in the P.R.C. The teaching of “non-technical software engineering skills” in an international curriculum may be challenging given the differences in respective cultures, business environments, and government policies. To achieve effective learning, the professor has to apply flexibility in integrating “common core learning standards and adjustable custom learning standards” (p. 984). The course is built around problem-based learning, which raises particular challenges in practice.
In these problem-based scenarios, students take on role-playing positions and problem-solve with IT solutions (Yu, 2021, pp. 990-991), addressing challenges such as how to reduce healthcare costs (p. 992), enhance community safety (p. 992), promote food safety (p. 994), and others. Part of the learning objective is to have students familiarize themselves with different domains in which IT may be applied (p. 996). The two student locations highlight some of the potential geopolitical sensitivities, given the great power competition. Perhaps such courses may serve as bridging mechanisms between peoples.
David William Schuster’s “Selection Process for Free Open Source Software” (Ch. 46) shares a systematic approach for how a (public?) library may make a decision about making open source software accessible as part of its holdings. This work brings up issues of eliciting a requirements list from the staff, from the community, and even from the software makers. There are issues of software compatibility with other relevant technologies. There are legalities, technical issues (installation, maintenance, and others), user support, and other considerations. The author also points to considerations into the future, such as maintenance and upgrades.
Kalle Rindell, Sami Hyrynsalmi, and Ville Leppänen’s “Fitting Security into Agile Software Development” (Ch. 47) highlights some of the incompatibilities between security engineering and the software development process. While security practice should be continuous, it is often “a formal review at a fixed point in time, not a continual process truly incorporated into the software development process” (p. 1029). The researchers offer a simplified iterative security development process with a security overlay at the various phases of software development (p. 1030), including requirements gathering, design, implementation, verification, release, and operations. They cite the importance of keeping a vulnerability database, and they emphasize the importance of security assurance for software.
Mohamed Fawzy Aly and Mahmood A. Mahmood’s “3D Medical Images Compression” (Ch. 48) highlights the importance of efficient and effective compression of large 3D medical images for both transfer and storage. In the process, the critical visual information has to be retained for analysis and record-keeping. The coauthors describe current compression approaches and their limitations.
Anjali Goyal and Neetu Sardana’s “Analytical Study on Bug Triaging Practices” (Ch. 49) explores a range of methods for identifying the most critical bugs to fix and assigning them to a developer to correct the source code. In their summary, they examine various approaches: classifiers, recommender systems, bug repositories, and other technologies and methods. Different bug assignment techniques include the following: “machine learning, information retrieval, tossing graphs, fuzzy set, Euclidean distance, social network based techniques, information extraction, (and) auction based technique” (p. 1079), in stand-alone and combinatorial ways. Different techniques have predominated over the years; from about 2007 onwards, information retrieval appears to have led (vs. machine learning alone, or machine learning and information retrieval together) (p. 1082). Various techniques have been used to assess the effectiveness of the bug assignment, including “precision, recall, accuracy, F score, hit ratio, mean reciprocal rank, mean average precision, and top N rank” (p. 1083). At present, even with the many advances in bug report assignment methods, the process is not fully automated. There are various problems, described as “new developer problem, developers switching teams and deficiency in defined benchmarks” (p. 1088). The survey-based work culminates in six research questions that may be used to address bug triaging and to help others formulate plans for optimal bug triaging.
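As a rough illustration of the information-retrieval family of techniques the survey describes, the hypothetical sketch below assigns a new bug report to the developer who fixed the most textually similar past reports; the reports, developer names, and use of scikit-learn are assumptions for demonstration, not the chapter’s own tooling.

```python
# Minimal sketch of information-retrieval-based bug triaging: recommend a
# developer for a new bug report by finding the most textually similar
# resolved reports. All data here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resolved_reports = [
    "Crash when saving large project files",
    "Memory leak in image export module",
    "Login fails when password contains unicode",
]
fixed_by = ["alice", "bob", "carol"]          # developer who resolved each report

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(resolved_reports)

new_report = "Application crashes while saving a project"
scores = cosine_similarity(vectorizer.transform([new_report]), matrix)[0]

ranked = sorted(zip(scores, fixed_by), reverse=True)   # most similar first
print("Recommended assignee:", ranked[0][1])
```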
Software systems may age out, in a phenomenon termed “smooth degradation” or “chronics,” in Yongquan Yan and Ping Guo’s “Predicting Software Abnormal State by using Classification Algorithm” (Ch. 50) (2021, p. 1095). The outward appearance of aging is preceded by a long delay, even as the phenomenon is already underway. The authors propose a method for detecting software aging: “Firstly, the authors use proposed stepwise forward selection algorithm and stepwise backward selection algorithm to find a proper subset of variables set. Secondly, a classification algorithm is used to model (the) software aging process. Lastly, t-test with k-fold cross validation is used to compare performance of two classification algorithms” (p. 1095). The method is tested in this research and is found effective and efficient.
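The quoted pipeline can be pictured with a short, hypothetical sketch: forward feature selection, a classifier over the selected metrics, and a paired t-test across cross-validation folds to compare two classifiers. The synthetic data, the choice of classifiers, and the scikit-learn/SciPy calls are assumptions, not the authors’ implementation.

```python
# Sketch of the described pipeline: (1) stepwise (here: forward) selection of a
# variable subset, (2) a classifier modeling the abnormal/aging state, and
# (3) a paired t-test over k folds comparing two classifiers.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # hypothetical monitored resource metrics
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=3, direction="forward")
X_sel = selector.fit_transform(X, y)

k = 5
scores_a = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y, cv=k)
scores_b = cross_val_score(DecisionTreeClassifier(random_state=0), X_sel, y, cv=k)
t_stat, p_value = ttest_rel(scores_a, scores_b)   # paired t-test across the k folds
print(scores_a.mean(), scores_b.mean(), p_value)
```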
Zouheyr Tamrabet, Toufik Marir, and Farid Mokhati’s “A Survey on Quality Attributes and Quality Models for Embedded Software” (Ch. 51) summarizes the research literature on how “quality” is conceptualized, measured, and modeled for software embedded in various systems.
Wei Li, Fan Zhao, Peng Ren, and Zheng Xiang’s “A Novel Adaptive Scanning Approach for Effective H.265/HEVC Entropy Coding” (Ch. 52) proposes improvements to the current compression of video in the H.265 format.
Sergey Zykov’s “Software Development Crisis: Human-Related Factors' Influence on Enterprise Agility” (Ch. 53) focuses on the need for both “technological and anthropic-oriented” factors to create effective software. To head off potential misconceptions between customers and developers, Zykov suggests that both technical and soft skills are required, along with learned, general “architectural” practices of technical and human organization and intercommunication to improve the work, based on lessons learned in other fields, such as the nuclear power industry and the oil and gas industry.
Mehmet Gencer and Beyza Oba’s “Taming of ‘Openness’ in Software Innovation Systems” (Ch. 54) explores how to balance the “virtues of OSS community while introducing corporate discipline” without driving away the volunteers or contributors to open source software projects (p. 1163). Large-scale OSS projects flourish in innovative ecosystems conducive to R&D. Such projects are open to feedback from users, who have differing needs of the OSS. If openness and creativity are “wild,” perhaps instantiating some order may be seen as “taming.” This project involves the study of six community-led cases of OSS with wide usage: Apache, Linux, Eclipse, Mozilla, GCC (GNU Compiler Collection), and Android (p. 1164). This cross-case analysis results in findings of various governance mechanisms that bring some taming to the work. One common approach involves voting-based schemes in which developers who are skilled and aligned with the values of the community are upvoted (p. 1169) and so gain collective credibility. Other taming elements in OSS communities include licensing regimes, strategic decision-making approaches, organizational structures / leadership structures, quality assurance standards, and others (p. 1172).
Kijpokin Kasemsap’s “Software as a Service, Semantic Web, and Big Data: Theories and Applications” (Ch. 55) strives to stitch together the SaaS, Semantic Web, and Big Data of the title, but this work does not really say much more than that these are roughly contemporaneous computational capabilities. The argument is that knowledge of these capabilities is important for organizational performance, but a mere summary without further analysis is not as powerful as this work could be.
Aparna Vegendla, Anh Nguyen Duc, Shang Gao, and Guttorm Sindre’s “A Systematic Mapping Study on Requirements Engineering in Software Ecosystems” (Ch. 56) involves a study of the published research literature on software ecosystems or SECOs. The researchers found that research was “performed on security, performance and testability” but did not include much in the way of “reliability, safety, maintainability, transparency, usability” (p. 1202). This review work suggests that there may be areas that would benefit from further research.
Sergio Galvan-Cruz, Manuel Mora, Rory V. O'Connor, Francisco Acosta, and Francisco Álvarez’s “An Objective Compliance Analysis of Project Management Process in Main Agile Methodologies with the ISO/IEC 29110 Entry Profile” (Ch. 57) identifies gaps between two industrial ASDMs (agile software development methodologies) and the ISO/IEC 29110 Entry Profile, but finds closer adherence with the academic ASDM (UPEDU), which “fits the standard very well but…is scarcely used by VSEs” (very small entities), perhaps due to a “knowledge gap” (p. 1227). Such works can provide helpful word-of-mouth and may have an effect on the uptake of particular project management approaches.
Moutasm Tamimi and Issam Jebreen’s “A Systematic Snapshot of Small Packaged Software Vendors' Enterprises” (Ch. 58) involves the collection of over 100 articles about small packaged software vendors’ enterprises (SPSVEs). The systematic search for these articles involved a range of search strings in various databases. The authors used a “systematic snapshot mapping” (SSM) method (p. 1262) and collected the works in a database. They offer some light insights about these enterprises, such as the software lifecycle for SPSVEs.
There are constructive collaborations to be had between those in academia and in industry, particularly in the area of usability and user experience (UX) design. Amber L. Lancaster and Dave Yeats’ “Establishing Academic-Industry Partnerships: A Transdisciplinary Research Model for Distributed Usability Testing” (Ch. 59) describes a constructive experience in which graduate students applied to work as co-investigators in a transdisciplinary exploration of a product’s usability. The team ultimately included users, a research team comprised of usability researchers, technical writers, and IT professionals…and additional stakeholders including “software developers, product managers, legal professionals, and designers” (p. 1293). The work that follows reads as intensive and professional, with the student team following in-depth test protocols and using various design scenarios to elicit feedback from users. This case is used to laud academic-industry partnerships to advance the professional applicability of the curriculum and pedagogy.
Muhammad Salman Raheel and Raad Raad’s “Streaming Coded Video in P2P Networks” (Ch. 60) proposes a solution for delivering video over peer-to-peer networks even when different video coding techniques (Scalable Video Coding, Multiple Description Coding, and others) are used, while controlling for playback latency in multimedia streaming and other quality-of-service features (the ability to find relevant contents, service reliability, security threats, and others). What follows is a summary of the current video coding techniques and streaming methods, and their respective strengths and weaknesses.
Veeraporn Siddoo and Noppachai Wongsai’s “Factors Influencing the Adoption of ISO/IEC 29110 in Thai Government Projects: A Case Study” (Ch. 61) focuses on an international process lifecycle standard designed for very small entities (VSEs) (p. 1340). The research team elicited feedback from four Thai government organizations that attained the ISO/IEC 29110 Basic Profile Certification to better understand what contributed to their successful implementation of the standards and then what barriers they faced. They found that the success factors included the following: “supportive organizational policy, staff participation, availability of time and resources for the improvement of the software process, consultations with the SIPA and team commitment and recognition” (p. 1340). The barriers they found include “time constraints, lack of experience, documentation load, unsynchronized means of communication and improper project selection” (p. 1340), although some of the barriers seemed to be local work conditions and work processes (and not related to the standards). This work shows the importance of understanding how a government deploys its resources for ICT and its integration in work. It is also suggestive of the need for further supports if the standards are to be adopted and applied successfully. The research here is qualitative, and this chapter includes some quotes from the test subjects in the study to add insight and human interest.
Swati Dhingra, Mythili Thirugnanam, Poorvi Dodwad, and Meghna Madan’s “Automated Framework for Software Process Model Selection Based on Soft Computing Approach” (Ch. 62) studies the factors that affect which process model is used for software development projects, with the aim of creating a rigorous program that meets needs, stays within budget, and has as few faults as possible over the software lifespan. This work includes a review of the literature and a survey with respondents representing different professional roles in IT. The authors use an automated framework for selecting the process model based on an inferential “fuzzy-based rule engine” and a J-48 decision tree considering various factors (p. 1367). Theirs is a model to suggest which process model may be most applicable for a particular project and to ultimately inform the work of project managers and others.
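To make the framework’s two pieces concrete, here is a deliberately toy sketch: a crisp rule function standing in for the fuzzy rule engine, and scikit-learn’s CART decision tree standing in for J-48 (C4.5). The project attributes, thresholds, and model labels are invented for illustration only.

```python
# Illustrative only: a rule function plus a decision tree for recommending a
# software process model from hypothetical project attributes.
from sklearn.tree import DecisionTreeClassifier

def rule_engine(requirements_stability: float, team_size: int) -> str:
    """Toy crisp stand-in for fuzzy rules: map project traits to a process model."""
    if requirements_stability < 0.4 and team_size <= 9:
        return "Scrum"
    if requirements_stability > 0.8:
        return "Waterfall"
    return "Incremental"

# Hypothetical survey rows: [requirements_stability, team_size, criticality]
X = [[0.2, 6, 1], [0.9, 30, 3], [0.5, 12, 2], [0.3, 8, 1], [0.85, 25, 3]]
y = ["Scrum", "Waterfall", "Incremental", "Scrum", "Waterfall"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(rule_engine(0.25, 7), tree.predict([[0.25, 7, 1]])[0])
```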
Beatriz Adriana Gomez and Kailash Evans’ “A Practical Application of TrimCloud: Using TrimCloud as an Educational Technology in Developing Countries” (Ch. 63) makes the case for harnessing an open-source virtual desktop infrastructure in developing countries for educational usage, to host software and desktops. In the abstract, the coauthors argue for “refurbished legacy systems as the alternative hardware source for using TrimCloud” (p. 1391). Ironically, TrimCloud does not seem to exist anymore, based on a Google search, and there are only a few references to this article.
Priyanka Chandani and Chetna Gupta’s “An Exhaustive Requirement Analysis Approach to Estimate Risk Using Requirement Defect and Execution Flow Dependency for Software Development” (Ch. 64) focuses on how to lower the risks of project failure by conducting a thorough early review of business requirements and required functionalities. Then, too, there should be an assessment of requirement defects, which are a major challenge because “they prevent smooth operation and is (sic) taxing both in terms of tracking and validation” (p. 1405). Accurate requirements engineering (RE) is often conducted early in a project. That step should include assessments of technical challenges, path dependencies (and various related “calls” to certain functions), and a cumulative project risk assessment. If possible, risk ratings should be applied to particular endeavors (based on project requirements).
Bryon Balint’s “To Code or Not to Code: Obtaining Value From the Customization of Packaged Application Software” (Ch. 65) echoes an earlier work with a different title. This work refers to a method for weighing the pros and cons of customizing packaged application software of various types. Custom developments, according to the cited researcher, are undertaken for four basic reasons: “the gap in fit,” support for “user acceptance” (p. 1429), integration, and system performance improvement (p. 1430). The costs of such customizations are non-trivial, covering the development and then continuing maintenance (p. 1431). The author models out various dynamics, finding that early fit of the technology to needs lowers the cost of customizations (p. 1432) and that keeping the costs of development down raises the overall value of the system (p. 1432). His model also identifies “inflection points”: one at which the net value of the custom development begins to increase as the starting fit increases, and another at which it begins to decrease (p. 1433). Increasing user acceptance benefits the value of the software (p. 1433). The essential ideas are reasonable and intuitive.
Shalin Hai-Jew’s “Creating an Instrument for the Manual Coding and Exploration of Group Selfies on the Social Web” (Ch. 66) was created by the reviewer, so this will not be reviewed here.
The Internet of Things (IoT), cloud computing, and software-defined networking require new standards to enable smooth functioning, according to Mohit Mathur, Mamta Madan, and Kavita Chaudhary’s “A Satiated Method for Cloud Traffic Classification in Software Defined Network Environment” (Ch. 67). This work explores a method to mark cloud traffic for prioritization using the DSCP field of the IP header (p. 1509), based on a differentiated services architecture (p. 1511).
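For readers unfamiliar with differentiated services, the hypothetical snippet below shows the general mechanism the chapter builds on (not its specific classification scheme): writing a DSCP code point into the IP header’s DS field for outgoing traffic. The address, port, and code point are placeholders, and IP_TOS handling is platform-dependent.

```python
# General idea of DSCP-based traffic marking: set the DSCP bits (top 6 bits of
# the old TOS byte) on a socket so downstream routers can prioritize the flow.
# Shown for a typical Linux host; values and destination are illustrative.
import socket

EF_DSCP = 46                                  # Expedited Forwarding code point
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)  # DSCP in top 6 bits
sock.sendto(b"latency-sensitive cloud traffic", ("192.0.2.10", 5000))
sock.close()
```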
Pankaj Kamthan’s “On the Nature of Collaborations in Agile Software Engineering Course Projects” (Ch. 68) focuses on the different types of learning collaborations in course projects in software engineering education. An important part of the skillset involves soft skills. This work describes different collaboration patterns in the learning space: student-student, team-teaching assistant, team-teacher, team-internal representative (like capstone projects, with examiners), and team-external representative (like capstone projects, with a “customer”) (p. 1540). These projects cover a variety of hands-on and experiential learning based on problem-solving with innovations and technical knowledge. This is a well-presented and substantive work.
James Austin Cowling and Wendy K. Ivins’ “Assessing the Potential Improvement an Open Systems Development Perspective Could Offer to the Software Evolution Paradigm” (Ch. 69) asks how software evolution may be improved and be more responsive to client needs. The coauthors find power in three “divergent” methodologies: Plan-Driven, Agile, and Open Source Software Development (p. 1553). An open source approach entails stakeholders who collaborate around a shared broadscale endeavor. There are arbitration factors that enable governance and shared decision making. There are software artefacts deployed to stakeholders’ environments to “ensure ongoing system viability” (p. 1560). While Plan-driven or Agile methods are deployed, there is a “focus on quality and fitness-for-purpose” based on exploration of customer needs which is absent in open-source endeavors (p. 1563). Open source software has to provide value to its community even as there is “lack of definition, prediction and monitoring of a likely return on investment,” which makes this approach “a significant challenge” for adoption in a corporate setting (p. 1563). What is beneficial to planned and agile methods may be “delivery measurement practices, refinement of agreements in principle into requirements, and open engagement across a wide stakeholder community” (p. 1563).
Software engineering is a global process, with employees hailing from different locations. Tabata Pérez Rentería y Hernández and Nicola Marsden’s “Offshore Software Testing in the Automotive Industry: A Case Study” (Ch. 70) explores the experiences of testers in India working for an automotive supplier to a German company. Their mixed-method study of the testers’ experiences included semi-structured interviews. These researchers found that “manual testing was a boring activity when done over a period of time” (p. 1581), especially among more experienced testers. Many testers felt that they did not receive the level of respect or recognition that developers do (p. 1583). They wanted more time allotted for automated testing (p. 1584). Also, the researchers found that “sharing equipment was a frequent problem that testers face. Testers have to hunt for equipment either among other testers or developers” (p. 1584). Companies do well to promote the well-being of their employees in all locations.
Gary Wong, Shan Jiang, and Runzhi Kong’s “Computational Thinking and Multifaceted Skills: A Qualitative Study in Primary Schools” (Ch. 71) involved research at two primary schools in Hong Kong on the efficacy of teaching computational thinking to children through visual programming tools. The qualitative research includes “classroom observations, field notes and group interviews” and also a “child-centered interview protocol to find out the perception of children in learning how to code” (p. 1592), such as whether or not they felt the process helped their problem-solving and creativity. The researchers share their pedagogical design framework, their teaching methods, their research protocols, and their findings, in a methodical and reasoned work.
An important management function involves striving to achieve greater work efficiencies, accuracy, and productivity. George Leal Jamil and Rodrigo Almeida de Oliveira’s “Impact Assessment of Policies and Practices for Agile Software Process Improvement: An Approach Using Dynamic Simulation Systems and Six Sigma” (Ch. 72) proposes the use of computer simulation models for evaluating software quality improvement. Their approach is based on using Six Sigma (6 σ) methodology to find areas in which to improve work processes. Their test of the simulated model was shown to have measurable benefits: “The earnings with the new version of the case exceed by more than 50% the Sigma level, the quality of software developed, and reduction of more than 55% of the time of development of the project” (p. 1616). Given global competition, companies must use every available edge to improve.
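As background for the Sigma-level figure quoted above, one common convention (assumed here, not drawn from the chapter) converts defects per million opportunities into a process sigma level with a 1.5-sigma long-term shift; the sketch below works through that arithmetic with illustrative numbers.

```python
# Common Six Sigma arithmetic: sigma level from defects per million
# opportunities (DPMO), including the conventional 1.5-sigma shift.
from statistics import NormalDist

def sigma_level(dpmo: float) -> float:
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(round(sigma_level(66_807), 2))   # ~3.0 sigma
print(round(sigma_level(3.4), 2))      # ~6.0 sigma
```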
Yves Wautelet, Christophe Schinckus, and Manuel Kolp’s “Agent-Based Software Engineering, Paradigm Shift, or Research Program Evolution” (Ch. 73) suggests that an over-use of programming concepts and “not…organizational and human ones” can lead to “ontological and semantic gaps between the (information) systems and their environments.” To rectify this issue, they suggest that the use of multi-agent systems may help realign information systems with the people who use them, by “offering modeling tools based on organizational concepts (actors, agents, goals, objectives, responsibilities, social dependencies, etc.) as fundamentals to conceive systems through all the development process” (p. 1642). In this approach, the agent has autonomy, functions in a particular situation, and has designed flexibility in terms of actions (p. 1645).
Ezekiel U. Okike’s “Computer Science and Prison Education” (Ch. 74) proposes that national governments in developing countries should institute computer science as part of prison education, so inmates may achieve gainful employment when they leave incarceration and may reacclimate to societies that have computer technologies integrated into so many facets. “Computer science” is defined as “the study of computers and computational systems” (p. 1656), with the pragmatic aspects emphasized here. There are problem-solving methods applied using CS, and there are various available career paths in firms of all sizes. The work continues by exploring various aspects of computer science and identifies how the knowledge, skills, and abilities in this space may benefit those in prison by enabling them to reform and acquire work. Some effective programs at various prisons are highlighted.
Chen Zhang, Judith C. Simon, and Euntae “Ted” Lee’s “An Empirical Investigation of Decision Making in IT-Related Dilemmas: Impact of Positive and Negative Consequence Information” (Ch. 75) uses a vignette-based survey to better understand individual decision making and intentions in regard to IT security and privacy. Of particular interest is the “deterrent role of information about possible negative consequence in these situations” (p. 1671). The researchers observe that the influence of deterrent information “is greater in situations involving software products than in situations involving data and for individuals with a higher level of fundamental concern for the welfare of others” (p. 1671). Relevant information can be consequential in informing human behaviors related to information technologies, although these behaviors are also informed by “individual factors and situational factors” (p. 1671). Those with more idealistic ethics were more responsive than those with relativistic ones. Also, these researchers found that information about negative consequence was more motivating than information about positive consequence (p. 1685).
Michael D'Rosario’s “Intellectual Property Regulation, and Software Piracy, a Predictive Model” (Ch. 76) found that using a multilayer perceptron model to analyze IP piracy behaviors in the aftermath of IP regulations (IPRs) was better at predicting outcomes than other modeling methods. The data are focused on ASEAN member countries and a review of a dataset of various IP laws and observations of IP infringements (in WTO cases).
The author uses a three-layer multilayer perceptron model (MLP) artificial neural network (ANN) “with an input layer deriving from the variables provided by Shadlen (2005). Software is the variable denoting the rate of software piracy. Bilateral Investment denoted the level of advantage afforded through any bilateral investment treaty. WTO Case is a dummy variable pertaining to the existence of a case under review in the international courts relating to an intellectual property dispute, respectively. The U.S. 301 is available denoting inclusion within a USTR 301 report. Trade dependence is the critical trade relationship variable, accounting for the trade dependence of the ASEAN member country and the US and Canada” (D’Rosario, 2021, p. 1697). D’Rosario’s model was able to predict the rate of piracy at “100 percent, across the ASEAN panel” (p. 1699), better than regression models when the focus is on outcome prediction (p. 1701).
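A hedged sketch of what a three-layer perceptron over such panel variables might look like appears below; the feature values, the binary high/low piracy target, and the scikit-learn model are stand-ins for illustration, not D’Rosario’s ASEAN data or architecture.

```python
# Toy three-layer MLP (input, one hidden layer, output) over hypothetical
# stand-ins for the chapter's predictors.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(0, 1, 60),        # bilateral investment advantage (scaled)
    rng.integers(0, 2, 60),       # WTO case dummy
    rng.integers(0, 2, 60),       # USTR 301 report inclusion dummy
    rng.uniform(0, 1, 60),        # trade dependence on US/Canada (scaled)
])
y = (X[:, 0] + X[:, 3] + 0.3 * rng.normal(size=60) > 1.0).astype(int)  # high/low piracy

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print("Training accuracy:", mlp.score(X, y))
```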
Theodor Wyeld’s “Using Video Tutorials to Learn Maya 3D for Creative Outcomes: A Case Study in Increasing Student Satisfaction by Reducing Cognitive Load” (Ch. 77) describes a transition from front-of-the-classroom teaching demonstrations of software to the use of custom-generated video tutorials, based on Mayer and Moreno’s theory of multimedia learning (2003). Wyeld found that university students rated their satisfaction higher with video tutorials, whose step-by-step directions for learning Maya 3D brought a sense of reduced cognitive load. This work also includes the use of a PDF tutorial kept open while doing particular procedural assignments in Maya 3D, which many consider a fairly complex software program. The benefit of tutorial videos is replicated in other teaching and learning contexts as well.
D. Jeya Mala’s “Investigating the Effect of Sensitivity and Severity Analysis on Fault Proneness in Open Source Software” (Ch. 78) notes the criticality of identifying (particular high-impact) faults in open source software. Some faults require dynamic code analysis to identify “as some of the components seem to be normal but still have higher level of impact on the other components” (p. 1743). This study focuses on “how sensitive a component is and how severe will be the impact of it on other components in the system” if it malfunctions (p. 1743). The author has designed a tool to apply a “criticality index of each component by means of sensitivity and severity analysis using the static design matrix and dynamic source code metrics” (p. 1743).
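The notion of a criticality index can be illustrated with a toy calculation (invented here, not the author’s tool): score each component by combining a sensitivity value with a severity value derived from how many components depend on it.

```python
# Toy criticality ranking: sensitivity (how easily a component is disturbed)
# times severity (how widely its failure propagates). Graph and weights are made up.
dependents = {"parser": ["optimizer", "codegen"], "optimizer": ["codegen"], "codegen": []}
sensitivity = {"parser": 0.7, "optimizer": 0.4, "codegen": 0.9}

def severity(component: str) -> float:
    """More downstream dependents -> a more severe failure."""
    return 1.0 + len(dependents[component])

criticality = {c: sensitivity[c] * severity(c) for c in dependents}
print(sorted(criticality.items(), key=lambda kv: kv[1], reverse=True))
```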
Thamer Al-Rousan and Hasan Abualese’s “The Importance of Process Improvement in Web-Based Projects” (Ch. 79) explores how well software process improvement models may apply to web-based projects in smaller companies. Is there a fit? Is there room for benefit? These questions are asked in a context of high failure rates for such process improvement efforts and a reluctance to take these methods on, given their “complex structure and difficult implementation methods” (p. 1770). Resistance (political, cultural, goals, and change management) may exist in the organization (p. 1774). Various models are explored for their suitability in the described context, albeit without one that ticks all the boxes at present.
Roman Bauer, Lukas Breitwieser, Alberto Di Meglio, Leonard Johard, Marcus Kaiser, Marco Manca, Manuel Mazzara, Fons Rademakers, Max Talanov, and Alexander Dmitrievich Tchitchigin’s “The BioDynaMo Project: Experience Report” (Ch. 80) focuses on the affordances of scientific investigations using computer simulations, which are powered now by high performance computing and hybrid cloud capabilities (which enables scaling). These simulations may be run to answer particular scientific questions. Setting up such research often requires interdisciplinarity.
Misha Kakkar, Sarika Jain, Abhay Bansal, and P.S. Grover’s “Combining Data Preprocessing Methods with Imputation Techniques for Software Defect Prediction” (Ch. 81) involves a study to find the “best-suited imputation technique for handling missing values” in a Software Defect Prediction model (p. 1792). The researchers test five machine learning algorithms for building software defect prediction models from the (incomplete) data, and these models are then tested for performance. The team found that “linear regression” with a correlation-based feature selector results in the most accurate imputed values (p. 1792).
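In the spirit of that finding, the hypothetical sketch below imputes missing software-metric values with a linear-regression-based imputer; the metric columns, the values, and the use of scikit-learn’s IterativeImputer are assumptions rather than the study’s actual setup.

```python
# Regression-based imputation of missing software metrics before defect prediction.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

# Columns: e.g., lines of code, cyclomatic complexity, coupling (NaN = missing)
X = np.array([
    [120.0, 8.0,  3.0],
    [300.0, np.nan, 7.0],
    [ 80.0, 5.0,  np.nan],
    [500.0, 22.0, 11.0],
])
imputer = IterativeImputer(estimator=LinearRegression(), random_state=0)
print(imputer.fit_transform(X))
```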
Mary-Luz Sanchez-Gordon’s “Getting the Best out of People in Small Software Companies: ISO/IEC 29110 and ISO 10018 Standards” (Ch. 82) suggests that human factors are critical in software development in smaller firms. This chapter provides “a holistic view of human factors on software process” in Very Small Entities (VSEs), in a software process defined in ISO/IEC 29110. The author proposes an “enhanced implementation of ISO/IEC 29110 standard based on ISO 10018” (p. 1812) to consider human factors, which inform issues of “communication, responsibility and authority, awareness, education and learning, and recognition and rewards” (pp. 1822 – 1823). The system is a three-tiered one. At the basic first level, managers need to be “better listeners” and encourage “open communication”; they need to “establish mechanisms for recognition and rewards” (p. 1824). At the second level, they can work “on education and learning, responsibility and authority, and teamwork and collaboration” (p. 1824). At the third level, managers can “keep on developing the factors of attitude and motivation, engagement, empowerment” and then work on “networking, engagement and creativity and innovation” (p. 1825). There is an assumption that earlier levels need to be achieved satisfactorily before advancing to higher ones because of dependencies.
Rory V. O'Connor and Claude Y. Laporte’s “The Evolution of the ISO/IEC 29110 Set of Standards and Guides” (Ch. 83) tries to remedy the reluctance of small organizations to adopt software and systems engineering standards, which are often seen as created for larger organizations with more staffing and resources. The coauthors offer a historical view of the development of the ISO/IEC 29110 standards and related components. The rationale for developing this standard was to “assist very small companies in adopting the standards” (p. 1831). The chapter offers clear explanations in text, flowcharts, and diagrams, and it serves as a bridge to the resource for VSEs.
If knowledge transfer is a basis for critical competitive advantage for small and medium-sized enterprises (SMEs), how are they supposed to capture such tacit knowledge and retain it for applied usage, especially from e-commerce software projects? Kung Wang, Hsin Chang Lu, Rich C. Lee, and Shu-Yu Yeh’s “Knowledge Transfer, Knowledge-Based Resources, and Capabilities in E-Commerce Software Projects” (Ch. 84) tackles the prior question and aims for their chapter to serve as “a clear guide to project managers in their team building and recruiting” (p. 1856). This research is based on real-world case studies from primary research.
Another work takes up software security in agile software development. Ronald Jabangwe, Kati Kuusinen, Klaus R Riisom, Martin S Hubel, Hasan M Alradhi, and Niels Bonde Nielsen’s “Challenges and Solutions for Addressing Software Security in Agile Software Development: A Literature Review and Rigor and Relevance Assessment” (Ch. 85) provides a literature review on this topic and offers that “there are ongoing efforts to integrate security-practices in agile methods” (p. 1875).
Developers play a critical role in software development, and as people, they experience emotions as individuals and as groups. Md Rakibul Islam and Minhaz F. Zibran’s “Exploration and Exploitation of Developers' Sentimental Variations in Software Engineering” (Ch. 86) shares an empirical study of “the emotional variations in different types of development activities (e.g., bug-fixing tasks), development periods (i.e., days and times), and in projects of different sizes involving teams of variant sizes” and also strove to look at the impacts of emotions on “commit comments” (p. 1889). They explore ways to exploit human emotion awareness to improve “task assignments and collaborations” (p. 1889). Another pattern that they found: “…emotional scores (positive, negative and cumulative) for energy-aware commit messages are much higher than those in commit messages for four other tasks” (bug fixing, new feature, refactoring, and security-related) (p. 1896). Commit messages which are posted during the implementation of new features and security-related tasks “show more negative emotions than positive ones. Opposite observations are evident for commit messages for three other types of tasks” (bug-fixing, energy-aware, and refactoring) (p. 1897).
The authors add that in their work they made efforts to establish construct validity and reliability.
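To show the kind of signal such sentiment analyses extract from commit messages, here is a deliberately simplified, purely illustrative lexicon-based scorer; the word lists are invented, and the study itself relies on proper sentiment tooling over real repositories rather than anything this small.

```python
# Toy lexicon-based sentiment scoring of commit messages (illustration only).
POSITIVE = {"fix", "improve", "clean", "great", "thanks"}
NEGATIVE = {"bug", "crash", "ugly", "hack", "broken", "fail"}

def commit_sentiment(message: str) -> int:
    """Return (#positive words - #negative words) for a commit message."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(commit_sentiment("fix broken build, clean up ugly hack"))   # -> -1
```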
If software making were a factory, how should managers measure productivity, and then align the organization to productively make software to high standards? So ask Pedro S. Castañeda Vargas and David Mauricio in “A Review of Literature About Models and Factors of Productivity in the Software Factory” (Ch. 87). A review of the literature from 2005 – 2017 resulted in the identification of 74 factors (related to programming, analysis, and design and testing) and 10 models (p. 1911). This systematic study found that most factors related to software-making productivity were tied to programming. There are some statistical techniques for measuring software productivity, but they fall short because, while many offer a function, “they do not lead to a formula…only mentioning several factors to take into account in the measurement” (p. 1929). This work suggests that there is more to be done in this space. [One cannot help but think that managers have some informal ways of assessing productivity, assuming they have full information.]
Anjali Goyal and Neetu Sardana’s “Bug Handling in Service Sector Software” (Ch. 88) provides a summary of the software life cycle and the criticality of identifying and mitigating bugs throughout, lest there be “serious financial consequences” (p. 1941). This work focuses on various bug handling approaches. Ideally, higher-risk bugs with potentially severe outcomes are addressed as quickly as possible, while controlling against unintended potential risks from the fix. There are challenges with misidentification of bugs, a “heavy flow of reported bugs,” and other challenges (p. 1954). Few insights are offered about the technologies used in the service sector, however, the title notwithstanding.
Nikhil Kumar Marriwala, Om Prakash Sahu, and Anil Vohra’s “Secure Baseband Techniques for Generic Transceiver Architecture for Software-Defined Radio” (Ch. 89) takes on the challenge of how to set up an effective software-defined radio (SDR) system that can handle corrupted signals through “forward error-correcting (FEC) codes” (p. 1961), in the absence of central standards. The worked problem here involves having an effective architecture for hardware and software. SDR systems are used for testing systems, collaboration, military-based radio communications, and international connectivity (p. 1964).
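The role of forward error correction in such a transceiver can be conveyed with the simplest possible example, a (3,1) repetition code with majority-vote decoding; this toy sketch only illustrates the idea and is not the FEC scheme the chapter develops.

```python
# Forward error correction in miniature: a (3,1) repetition code. The receiver
# decodes by majority vote, so isolated bit flips in the channel are corrected
# without retransmission.
def fec_encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]      # repeat each bit 3x

def fec_decode(coded):
    triples = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]        # majority vote

sent = fec_encode([1, 0, 1, 1])
sent[1] ^= 1                                                  # flip one bit in transit
assert fec_decode(sent) == [1, 0, 1, 1]
```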
Amir Manzoor’s “Contemporary Energy Management Systems and Future Prospects” (Ch. 90) focuses on the transition from a traditional electric grid to a smart one (with energy management systems), potentially resulting in increased energy efficiency. The data are not only for personal households but also for organizations, businesses, industries, and other scales of entities. The thinking is that heightened awareness of energy consumption patterns enables people to better control their energy consumption. This metering and notification enablement requires advanced analytics and ICT. Such an approach is compelling in a time of heightened awareness of energy consumption, mass-scale anthropogenic effects on the larger environment, and the closing window during which humanity may head off an environmental catastrophe (by reducing carbon emissions, by capturing carbon, and other efforts). This work explores standards applied to these technologies and includes explorations of various brand-name products in the space.
Muhammad Wasim Bhatti and Ali Ahsan’s “Effective Communication among Globally Distributed Software Development Teams: Development of an ‘Effective Communication’ Scale” (Ch. 91) suggests that international collaborations benefit from best practices for effective communication, which include four factors: “stakeholders’ involvement, acculturation, usage of appropriate tools and technology, and information available” (p. 2014). The related scale is based on a review of the literature and a survey of those working in global software development (p. 2033).
Prantosh Kumar Paul’s “The Context of IST for Solid Information Retrieval and Infrastructure Building: Study of Developing Country” (Ch. 92) focuses on the importance of academic innovation, to benefit the economic development of developing countries. Such countries have gaps between “industrial needs and the availability of skilled labor” (p. 2040), and if human resources may be built up, they can advance society and human well-being. Information Sciences and Computing is seen as an important area in which to build human capital, particularly in the areas of “Cloud Computing, Green Computing, Green Systems, Big-Data Science, Internet, Business Analytics, and Business Intelligence” (p. 2040). Paul conducts a “strengths, weaknesses, opportunities, threats” (SWOT) analysis of the academic offerings in this area and generalizes various approaches based on a review of the literature. This work shows various modes of teaching and learning and various pedagogical approaches.
Dermott John James McMeel and Robert Amor’s “Knitting Patterns: Managing Design Complexity with Computation” (Ch. 93) focuses on intercommunications between proprietary software systems related to design, to extend various capabilities. The team works to “leverage emergent device and data ecosystems” to “knit” devices and services (p. 2055), so that proprietary software programs are not closed systems unto themselves, in the design and construction process. In a design context, software programs serve various “participant interactions, which includes designers, sub-contractors and project managers operating within smaller areas of the overall project” (McMeel & Amor, 2011, as cited in McMeel & Amor, 2021, p. 2056). They describe this shift in the field:
Information management within design continues to gather importance. This is perhaps best reflected in the shift from the acronym CAD (Computer Aided Design) to BIM (Building Information Model), where geometry is only part of the representation of buildings. In Autodesk’s Revit building information modeling software—for example—you cannot simply move your building from one point to another, there is information present about its relationships with other components that prevent particular translations or movements. Within this context computation becomes a useful asset for managing rules and behaviors that are desirable, if not always in the fore when decision-making. (McMeel & Amor, 2021, p. 2058)
They describe several cases in which this work is achieved with plug-ins and other tools.
A book of the size of Research Anthology on Recent Trends, Tools, and Implications of Computer Programming (4 volumes), with some 90+ chapters, is not the most conducive to review. The works read a little like a grab bag of topics, with insights both for general readers and specialists. The collection was collated by an editorial team within the publishing house, and the works have all been published previously in other texts by the publisher. Even with this number of works, the collection leaves a lot of ground uncovered in software and programming, given the breadth of computer programming’s application in the world.
Digital Visual Effects to Meet Artist Needs
If there are typical paths for the development of software, Jan Kruse’s “Artist-Driven Software Development Framework for Visual Effects Studios” (Ch. 3) suggests that some approaches are more unusual than others. Over the past three decades, the film industry (from indie to mainstream commercial filmmakers) and particularly the visual effects studios have informed software development and its commercialization. The film industry requires effective visual effects and invest a fair amount of the producer’s budget into such technologies and the skills to wield them for the entire visual effects pipeline: pre-production to the final rendering. Some movies require proportionally much more in the way of visual effects as a percentage of the overall film, at present.
The author describes the Artist-Driven Software Development Framework as visual effects studios requiring particular visual effects and applying developers to that work. From those direct experiences, Kruse suggests that the innovations may be integrated into the software tool and applied to commercial software programs to the benefit of all stakeholders. Perhaps there would be a net positive if “a visual effects studio publish(es) proprietary tools as soon as possible and in close cooperation with an existing software company” to further extend the designed solution and to increase market acceptance of the tool and approach (Kruse, 2021, p. 58). The visual effects studio may benefit from being known for fx innovations (p. 59), even as they’re giving away IP. The general sequence of this artist-driven framework is the following: research (in-house), prototype (in-house), and product (external) (p. 62).
One real-world example involves Deep Compositing and other technological innovations and fx special effects. There are heightened efficiencies, so that the look-and-feel of a shot may be changed in a percentage of the time required for a “full re-render” (p. 57). The researcher writes:
The market is very competitive and if artists are able to contribute some cutting edge ideas to the project, this will increase their chances of getting hired for future projects. This implies that the creative design process has been expanded by the visual effects industry as well. The process includes not just the application of techniques and technology, but virtually creates some of the tools that are necessary to even apply the design process at all. A parallel could be drawn to a painter who starts a project and realizes that a specific brush is missing in his toolkit and is also not available for purchase anywhere. Instead of finding a brush-maker to produce that new tool, he starts finding the right hair, handle and ferrule, and makes a completely new, unique brush himself. This approach and thinking effectively enables him to attempt any new project, even if it is seemingly impossible to finish due to the initial lack of the right tools. (Kruse, 2021, p. 51)
In terms of in-house development of such innovations, apparently, only a few players in the space have that capability. The examples discussed in the chapter are with companies with about 1,000 employees that can engage (p. 61). The talent sets are expensive and perhaps rare.
Figure 1. Digital Tree
Costing Out Software Development Projects
Edilaine Rodrigues Soares and Fernando Hadad Zaidan’s “Composition of the Financial Logistic Costs of the IT Organizations Linked to the Financial Market: Financial Indicators of the Software Development Project” (Ch. 4) identifies a range of financial indicators from software development projects. These include both fixed and variable costs. These include salaries and other inputs. Their variables then include anticipated return on investment (ROI), such as anticipated sales and other elements (p. 74). These are integrated into an equation comprised of the following: gross revenue, sum of expenses (fixed and variable), percent of profitability, and percentage of taxes per emitted invoice (p. 78). For IT organizations to be attractive to investors, competitive in the marketplace, and an integral part of the supply chain, the calculating of the “financial logistic costs of the information management of the software development project in the IT organizations” may be an important approach (p. 85). The method described in this work projects costs a month out.
Modeling Behind Software Development
Janis Osis and Erika Nazaruka (Asnina)’s “Theory Driven Modeling as the Core of Software Development” (Ch. 5) describes the current state of software development as “requirements-based with chaotic analysis” (p. 93). Further: “The four most expensive software and activities (in decreasing order) are: finding and fixing bugs, creating paper documents, coding, and meetings and discussions” (p. 90). So much of programmer time is spent fixing bad code. Software projects are rife with “budget and schedule overruns” (p. 89). Software engineering is “in permanent crisis” (p. 91). Overall, software development is “primitive” and resistant to “formal theories” (p. 90).
While various models of software have not necessarily been proved all that useful, the coauthors propose a Model Driven Architecture (MDA), with “architectural separation of concerns” for formalizing software creation as an engineering discipline. MDA uses “formal languages and mechanisms for description of models” [Osis & Nazaruka (Asnina), 2021, p. 92]. The innovation proposed involves bringing mathematical accuracy into the very initial steps of software development and throughout the other stages beyond requirements gathering, including analysis, low-level design, coding, testing, and deployment. The model proposed here includes Topological Functioning Model, which uses “lightweight” mathematics and include “concepts of connectedness, closure, neighborhood and continuous mapping” (p. 98). Using TFM to define both the “solution domain” and the “problem domain” to inform the software development requirements (p. 100) may benefit the practice of software design and control some of the complexities.
Building out the Cloud
Richard Ehrhardt’s “Cloud Build Methodology” (Ch. 6) reads as an early work when people were first entering the cloud space and learning the fundamentals: differentiating between public, private, and hybrid clouds; understanding the various services provided via cloud (IaaS, PaaS, SaaS, DaaS, and even XaaS, referring to infrastructure, platform, software, desktop, and “anything”…as a service); and conceptualizing cost drivers in building out a cloud. This researcher describes “Anything as a Service” (XaaS) as “the extension of cloud services outside of purely infrastructure or software based services” (p. 112) and may include service requests for “data centre floor space or even a network cable” (p. 112). Ehrhardt describes a componentized “data centre, infrastructure layer, virtualization layer, orchestration and automation, authentication, interface, operational support services, and business support services” (p. 112). Some of the provided information reads as dated, such as the metering of services (vs. the costing out of services described here), the cloud provider professionals who help set up cloud services (vs. the sense of customers having to go-it-alone), and so on.
Figure 2. Artificial Greenscape
Running Software Testing
Abhishek Pandey and Soumya Banerjee’s “Test Suite Minimization in Regression Testing Using Hybrid Approach of ACO and GA” (Ch. 7) begins with the challenge of identifying “a minimum set of test cases which covers all the statements in a minimum time” and prioritizing them for testing for optimal chances of detecting faults in the code base (p. 133). Software testing can be time consuming, labor- and resource-intensive, and requiring sophisticated analytical skills and meticulous attention to details. Software developer attention is costly and in high demand and short supply. To aid in the software testing effort, various algorithms are applied to identify potential challenges. Regression testing is common in the “maintenance phase of the software development life cycle” (p. 134). In this work, the researchers use “a hybrid approach of ant colony optimization algorithm and genetic algorithm” (p. 133); they strive for metaheuristics. Various methods are assessed and analyzed for performance and “fitness” through statistical means.
Software Development in the Cloud
Chhabi Rani Panigrahi, Rajib Mall, and Bibudhendu Pati’s “Software Development Methodology for Cloud Computing and Its Impact” (Ch. 8) points at a number of benefits of developing software in the cloud, given the customizable environment there, the ease of group collaboration, the speed to deployment, the ability to harness other enterprise solutions, and the ability to scale the effort (p. 156). This work involves evaluation of some of the cloud computing programming models (such as “MapReduce, BSPCloud, All-pairs, SAGA, Dryad, and Transformer” and their respective pros and cons for programming in the cloud (p. 162). Cloud computing “allows parallel processing; provides fault tolerant functionality; supports heterogeneity; (and) takes care of load balancing” (p. 163). It enables organizations to capture user feedback and implement changes more quickly. The public cloud has limits and is not advised for “systems that require extreme availability, need real-time computational capabilities or handle sensitive information” (p. 169). This work describes the application of agile development, often with lean teams of 5 to 9 people. As the software evolves through the various stages—requirements gathering, analysis, design, construction, testing, and maintenance—changes to the software at each phase becomes both more costly and complex (p. 153).
Interdisciplinary Design Teams
Jeni Paay, Leon Sterling, Sonja Pedell, Frank Vetere, and Steve Howard’s “Interdisciplinary Design Teams Translating Ethnographic Field Data Into Design Models: Communicating Ambiguous Concepts Using Quality Goals” (Ch. 9) describes the challenges of using complex ethnographic data to inform design models. They use “cultural probes” (as a data collection technique) to learn about “intimate and personal aspects of people’s lives” (Gaver et al., 1999, as cited in Paay, Sterling, Pedell, Vetere, & Howard, 2021, p. 174) as related to cultural aspects of personal and social identities. On collaborative projects, there is the importance of having “a shared understanding between ethnographers, interaction designers, and software engineers” (p. 173). This team suggests the importance of having defined quality goals in system modeling (p. 173). They suggest the power in maintaining “multiple, competing and divergent interpretations of a system” and integrating these multiple interpretations into a solution (Sengers & Gaver, 2006, as cited in Paay, Sterling, Pedell, Vetere, & Howard, 2021, p. 183). They describe the application of social and emotional aspects to the design of socio-technical systems. Their Secret Touch system enables connectivity between various agents in multi-agent systems. This particular system includes four: “Device Handler, Intimacy Handler, Partner Handler, and Resource Handler” (p. 191), informed by how couples and other groups interact to inform the design of technical systems (p. 195).
Studying Agile Methodologies for Impact
Nancy A. Bonner, Nisha Kulangara, Sridhar Nerur, and James T. C. Teng’s “An Empirical Investigation of the Perceived Benefits of Agile Methodologies Using an Innovation-Theoretical Model” (Ch. 10) explores Agile Software Development (ASD), in particular, to see if such approaches promote constructive and innovative work. Agile development is about “evolutionary development and process flexibility” (p. 208), two software development practices that the team suggests would be effective in mitigating some of the complexities of software development projects. Based on the empirical data, evolutionary development, a “cornerstone of agile development,” is found to benefit software developer work, but “process flexibility” is not found to have an impact on “complexity, compatibility, and relative advantage” (p. 202). [Agile software development is generally known as a method which brings together lean cross-functional teams and “advocates adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages flexible responses to change,” according to Wikipedia.]
This work studies two dimensions of development process agility, “evolutionary development and process flexibility,” which are thought to have effects on developer adoption (Bonner, Kulangara, Nerur, & Teng, 2021, p. 207). At the heart of the research is a survey with responses from a heterogeneous random sample of international IT professionals. Some findings, at a statistical level of significance, include that evolutionary development is “negatively related to perceived complexity of the development methodology,” but not so for “process flexibility” (p. 220). A development methodology is seen as more advantageous when its perceived complexity is lower (p. 220). Also, “evolutionary development” is related to “perceived compatibility of the development methodology” (p. 220), but “process flexibility” is not. The researchers found support for the idea that “Evolutionary Development of the development methodology will be positively related to perceived relative advantage of using the development methodology,” but not for “process flexibility” (p. 220).
Fractal-Based Video Compression
Shailesh D. Kamble, Nileshsingh V. Thakur, and Preeti R. Bajaj’s “Fractal Coding Based Video Compression Using Weighted Finite Automata” (Ch. 11) describes how video is often compressed based on temporal redundancies (changes between frames over time) and spatial redundancies (among proxemic or neighboring pixels). Various methods have been proposed for video compression based on performance evaluation parameters including “encoding time, decoding time, compression ratio, compression percentage, bits per pixel and Peak Signal to Noise Ratio (PSNR)” (p. 232). They propose a method of fractal coding “using the weighted finite automata” (WFA) because “it behaves like the Fractal Coding (FC). WFA represents an image based on the idea of fractal that the image has self-similarity in itself” (p. 232); both approaches involve the partitioning of images into parts and observing for differences against a core visual. They tested their approach on standard uncompressed video databases (including canonical ones like “Traffic, Paris, Bus, Akiyo, Mobile, Suzie”) and then also on the videos “Geometry” and “Circle” (p. 232), to enable the observation of performance on different digital video contents. By itself, fractal compression is a lossy compression technique, and the addition of weighted finite automata (WFA) may lessen the lossiness. The experimental setup, using MATLAB, involves assessing the speed of the processing, the relative file sizes, and the quality of the reconstructed videos. The coresearchers write:
WFA and FC coding approach for the fractal based video compression is proposed. Though the initial number of states is 256 for every frame of all the types of videos, but we got the different number of states for different frames and it is quite obvious due to minimality of constructed WFA for respective frame. In WFA coding, the encoding time is reduced by 75% to 80% in comparison with simple FC. (Kamble, Thakur, & Bajaj, 2021, p. 249)
They also found better visual quality “where different colors exist.” Specifically:
…if more number of regions exists in the frame of videos then the reconstructed frame quality is good. If we segment the frame of videos and less number of regions exist then the scope to have the better reconstructed frame after applying proposed approach is less. Therefore, the proposed approach is more suitable for the videos where frame consists of more number of regions. (Kamble, Thakur, & Bajaj, 2021, p. 249)
They did observe some problems of artifacts in the reconstructed videos.
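Since PSNR appears among the evaluation parameters, a brief, illustrative Python sketch of how that metric is typically computed between an original and a reconstructed frame may be useful; the toy frames here are random arrays standing in for real video data.

import numpy as np

def psnr(original, reconstructed, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB between two same-sized frames.
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy 8x8 "frames" standing in for a decoded video frame and its reconstruction.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, size=(8, 8)), 0, 255).astype(np.uint8)
print(round(psnr(frame, noisy), 2), "dB")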
Figure 3. Digital Pine
High Efficiency Video Coding … Matching Block Features
Meifeng Liu, Guoyun Zhong, Yueshun He, Kai Zhong, Hongmao Chen, and Mingliang Gao’s “Fast HEVC Inter-Prediction Algorithm Based on Matching Block Features” (Ch. 12) proposes “a fast inter-prediction algorithm based on matching block features” with advantages in speed, coding time, and improvement in the peak signal-to-noise ratio (p. 253). “HEVC” stands for “High Efficiency Video Coding,” an international video compression standard. As with many works in this collection, this chapter is aimed at readers with interests in particular technologies and in innovation methodologies.
Fault Prediction Modeling for Open Source Software (OSS)
Open source software (OSS) has something of a reputation for being complex and often full of faults, a reputation that may offset the benefits of being often free and having transparent code. Shozab Khurshid, A. K. Shrivastava, and Javaid Iqbal’s “Fault Prediction Modelling in Open Source Software Under Imperfect Debugging and Change-Point” (Ch. 13) suggests that OSS-based systems lack the staffing to formalize the correcting of mistakes in the code, and many who contribute to the code may lack an understanding of the OSS systems. The correction of prior mistakes may introduce additional ones. In general, fault removal rates are low in open source software. Setting up a framework to predict the number of faults in open source software (and to rank the software by fault metrics) would give those considering adoption an additional way of assessing candidate systems. This chapter involves the analysis of eight models for predicting faults in open source software, assessed for their prediction capability based on open-source software datasets. Respective OSS are ranked based on “normalized criteria distance” (p. 277). Users of OSS report bugs, and if the bugs are reproducible, the source code is updated and reshared publicly. The developing team comes from the community, and in most cases, these are volunteers. An administrator or team may provide oversight for changes and control access to the core code base. The researchers here test the reliability of OSS by using eight different software reliability growth models (SRGMs). Some important analyzed factors include “change point and imperfect debugging phenomenon” (p. 291). This group found that the Weibull distribution based SRGM gives “the best fault prediction” (p. 291); however, the research involved study of the “time based single release framework” (p. 291), and future studies would benefit from testing for multiple release modeling and for multiple dimensions.
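For a sense of how an SRGM is fit in practice, here is a minimal, illustrative Python sketch of fitting a Weibull-type mean value function to hypothetical cumulative fault counts; the functional form, the fault data, and the parameter guesses are assumptions for demonstration, not the authors' datasets or models.

import numpy as np
from scipy.optimize import curve_fit

# Weibull-type SRGM mean value function: expected cumulative faults by time t.
# a = total expected faults, b = scale, c = shape (assumed form, for illustration).
def weibull_mvf(t, a, b, c):
    return a * (1.0 - np.exp(-b * t ** c))

# Hypothetical cumulative fault counts collected over 10 testing weeks.
weeks = np.arange(1, 11, dtype=float)
faults = np.array([5, 11, 18, 26, 31, 36, 39, 42, 44, 45], dtype=float)

params, _ = curve_fit(weibull_mvf, weeks, faults, p0=[50.0, 0.1, 1.0], maxfev=10000)
a, b, c = params
print(f"estimated total faults ~ {a:.1f}")
print(f"predicted faults by week 15 ~ {weibull_mvf(15.0, a, b, c):.1f}")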
Implementing Social Intelligent Agent Architectures
Manuel Kolp, Yves Wautelet, and Samedi Heng’s “Design Patterns for Social Intelligent Agent Architectures Implementation” (Ch. 14) uses a social framework in which “autonomous agents” are analogically like “actors in human organizations” (p. 294) and interact in multi-agent systems (MAS). Social patterns may be used for building “open, distributed, and evolving software required by today’s business IT applications such as e-business systems, web services, or enterprise knowledge bases” (p. 294). The coauthors observe that “fundamental concepts of MAS are social and intentional rather than object, functional, or implementation-oriented,” suggesting that the design of MAS architectures “can be eased by using social patterns” (p. 294). An agent is defined as “a software component situated in some environment that is capable of flexible autonomous action in order to meet its design objective” (Aridor & Lange, 1998, as cited in Kolp, Wautelet, & Heng, 2021, p. 295). Given the abstractions required to manage the elusive constructs of code and code functioning, this method offers a mental model for expressing the ideas.
There are two basic social patterns: the Pair pattern, with defined interactions between “negotiating agents,” and the Mediation pattern, in which “intermediate agents…help other agents to reach agreement about an exchange of services” (Kolp, Wautelet, & Heng, 2021, p. 300). The social patterns framework is applied to a variety of agent interaction patterns, to enable developers to conceptualize, collaborate, and communicate about the abstract technological functions. The researchers also describe patterns that work vs. anti-patterns that have been shown not to.
Modernizing Non-Mobile Software to Enable Uses of Some Legacy Code in 4IR and IoT
Liliana Favre’s “A Framework for Modernizing Non-Mobile Software: A Model-Driven Engineering Approach” (Ch. 15) proposes a method to harness legacy code for the modern mobile age. The proposed framework “allows integrating legacy code with the native behaviors of the different mobile platform through cross-platform languages” (p. 320). This approach enables people to migrate C, C++, and Java to mobile platforms (p. 320) through the Haxe multiplatform language (and compiler), which allows the use of the “same code to deploy an application on multiple platforms” simultaneously (p. 324). From one code base, applications and source code for different platforms may be created (p. 324). The author describes the harnessing of model-driven engineering (MDE) as a way to abstract code functionalities, to enable reengineering systems. The approach is a semi-automatic one to reverse engineer the models in legacy software (p. 321). That information may be used to build out the functionality for mobile or other efforts. This approach may be particularly relevant in the time of the Internet of Things (IoT).
Various other tools help bridge between versions of software. Technological standards serve as metamodels, families of models, so each part of the software can meet particular requirements. A metamodel is “a model that defines the language for expressing a model, i.e. ‘a model of models’. A metamodel is an explicit model of the constructs and rules needed to build specific models. It is a description of all the concepts that can be used in a model” (Favre, 2021, p. 327). This reads as meticulous and complex work, bridging between various code and technology systems for functionalities. At play are both reverse engineering and forward engineering, and a deep understanding of how to achieve various representations of code and functions…to enable transitioning to other codes and formats. This work shows the importance of actualizing migrations in more systematic ways instead of ad hoc ones. Favre (2021) writes: “A migration process must be independent of the source and target technologies. In our approach, the intermediate models act as decoupling elements between source and target technologies. The independence is achieved with injectors and, M2M and M2T transformations. Besides in a transformation sequence, models could be an extension point to incorporate new stages” (p. 340). [Note: M2M refers to “model to model” transformation, and M2T refers to “model to text” transformation.]
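A model-to-text (M2T) transformation can be illustrated in miniature: a platform-independent “model” is rendered as target-language source text. The Python sketch below is a toy with invented model contents, not Favre’s tooling, but it conveys the flavor of the idea.

# A toy model-to-text (M2T) transformation: a platform-independent "model"
# of a class is rendered as target-language source text. Names are illustrative.
model = {
    "class": "Account",
    "attributes": [("owner", "String"), ("balance", "double")],
}

def to_java_text(m):
    lines = [f"public class {m['class']} {{"]
    for name, jtype in m["attributes"]:
        lines.append(f"    private {jtype} {name};")
    for name, jtype in m["attributes"]:
        cap = name[0].upper() + name[1:]
        lines.append(f"    public {jtype} get{cap}() {{ return {name}; }}")
    lines.append("}")
    return "\n".join(lines)

print(to_java_text(model))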
Fuzzy Logic Applied to Decision Making Under Risk
Arun Kumar Sangaiah and Vipul Jain’s “Fusion of Fuzzy Multi-Criteria Decision Making Approaches for Discriminating Risk with Relate (sic) to Software Project Performance: A Prospective Cohort Study” (Ch. 16) suggests the importance of assessing software projects for risk as a consideration for whether and how a work should proceed. If a project is high risk, there is often low performance. This team used “fuzzy multi-criteria decision making approaches for building an assessment framework that can be used to evaluate risk in the context of software project performance in (the) following areas: 1) user, 2) requirements, 3) project complexity, 4) planning and control, 5) team, and 6) organizational environment” (p. 346). Theirs is a systematized way to assess relevant factors to ultimately inform decision making, including two approaches: Fuzzy Multi-Criteria Decision Making (FMCDM) and the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). This work involves the measuring of risks in five dimensions: “requirements, estimations, planning and control, team organization, and project management” (p. 347) and in 22 evaluation criteria, including “ambiguous requirements,” “frequent requirement changes,” “lack of assignment of responsibility,” “lack of skills and experience,” “low morale,” and “lack of data needed to keep objective track of a project” (p. 354). This team applied their model to assess software project risk among 40 projects and to identify “risky/confused projects”; they identified 36 of the 40 projects accurately, for a reported accuracy of 92.5% (pp. 355-356).
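To make the TOPSIS side of the method concrete, here is a minimal crisp (non-fuzzy) Python sketch with invented project scores and weights; the authors' fuzzy formulation is more involved, so this is only an illustration of the closeness-to-ideal ranking step.

import numpy as np

# Rows = projects, columns = risk criteria scores (higher = riskier); weights assumed.
scores = np.array([
    [7, 4, 6],   # project A
    [3, 8, 5],   # project B
    [9, 6, 8],   # project C
], dtype=float)
weights = np.array([0.5, 0.3, 0.2])

norm = scores / np.linalg.norm(scores, axis=0)      # vector-normalize each criterion
weighted = norm * weights
ideal_best = weighted.min(axis=0)                   # lower risk is better here
ideal_worst = weighted.max(axis=0)

d_best = np.linalg.norm(weighted - ideal_best, axis=1)
d_worst = np.linalg.norm(weighted - ideal_worst, axis=1)
closeness = d_worst / (d_best + d_worst)            # 1.0 = least risky

for name, c in zip("ABC", closeness):
    print(f"project {name}: closeness {c:.3f}")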
A Systematic Method for ERP Selection
An organization’s decision to pursue a particular Enterprise Resources Planning (ERP) system is not a small endeavor. So much of an organization’s performance may ride on such technologies. There are complex tools in the marketplace. The price tag for the software is hefty, and the skills required to run and use one are substantial, often requiring many full-time staff positions. Maria Manuela Cruz-Cunha, Joaquim P. Silva, Joaquim José Gonçalves, José António Fernandes, and Paulo Silva Ávila’s “ERP Selection using an AHP-based Decision Support System” (Ch. 17) describes a systematic approach to this decision by considering qualitative and quantitative factors in an Analytic Hierarchy Process (AHP) model. In this model, there are three main moments of application of the technique: “definition of the problem and of the main objective; definition of the tree of criteria (hierarchical structure), with the relative weights for each criterion; evaluation of the alternative solutions, using the defined tree” (p. 377). This work is based on a solid literature review and multiple early questionnaires including participants from various IT roles. Some of the preliminary findings were intuitive, such as the importance of “user friendliness” (p. 384). Experts wanted “guarantees,” “consulting services,” and “customization” (p. 384). Larger organizations “rank ‘payment and financial terms’ and ‘customization’ criteria with higher importance than the smaller ones” (p. 384). The Analytic Hierarchy Process (AHP) involved evaluating and weighting 28 criteria. Some of the most critical ones included the following: “coverage of the required functionalities / norms / regulations,” “technical support quality,” and “technical team capability,” among other considerations (p. 386). Is the ERP easy to upgrade? What does the security look like? Is there access to the source code for tweaks?
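The AHP step can also be sketched compactly: criterion weights are derived from a pairwise comparison matrix via its principal eigenvector, with a consistency check. The three criteria and the judgment values in this Python sketch are invented for illustration and are not the chapter's 28-criterion model.

import numpy as np

# Pairwise comparisons for three illustrative criteria:
# functionality coverage, technical support quality, customization.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio check (random index for n=3 is 0.58).
lambda_max = eigvals.real[principal]
ci = (lambda_max - 3) / (3 - 1)
print("weights:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))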
Emergency Email through Amateur Radio Setup
Miroslav Škorić’s “Adaptation of Winlink 2000 Emergency Amateur Radio Email Network to a VHF Packet Radio Infrastructure” (Ch. 18) suggests that in a time of emergency, in a scenario of malfunctioning commercial communications services, the world can connect via email by “interconnecting an existing VHF amateur packet radio infrastructure with ‘Winlink 2000’ radio email network” (p. 392). This way, people may use radio waves to share information across broader geographies. This is the starting premise, and the author describes the dedicated hardware and software setup. Certainly, even in the absence of a disaster, piggybacking various technologies for different enablements has an inherent charm, especially when described with clear directional details, applied expertise, and screenshots.
ISO/IEC 29110 Profile Implementation
Alena Buchalcevova’s “Methodology for ISO/IEC 29110 Profile Implementation in EPF Composer” (Ch. 19) shares a case experience out of the Czech Republic. Here, the Eclipse Process Framework (EPF) Composer is used to create an Entry Profile implementation, based on the ISO/IEC 29110 Profile Implementation Methodology. EPF Composer is a free and open-source tool for “enterprise architects, programme managers, process engineers, project leads and project managers to implement, deploy and maintain processes of organisations or individual projects” (Tuft, 2010, as cited in Buchalcevova, 2021, p. 425). It provides an organizing structure for technology work, with ways to define tasks and subtasks, respective roles, and other elements. There is an ability to apply an overarching theoretical approach, too, for software process engineering. The system has a predefined agile-related schema built in.
Integrating Emotions to Viewpoint Modeling
Leon Sterling, Alex Lopez-Lorca, and Maheswaree Kissoon-Curumsing’s “Adding Emotions to Models in a Viewpoint Modelling Framework From Agent-Oriented Software Engineering: A Case Study With Emergency Alarms” (Ch. 20) demonstrates how emotions may be brought into software design in an applied case of building an emergency alarm system for older people. Modeling out the emotional goals of stakeholders empathically, based on a viewpoint framework, may inform early-phase requirements for software design and ultimately result in a much stronger product better aligned with human needs. An early question here is what the profile of a potential user of a personal alarm system (such as a pendant) for access to help during a health emergency looks like. What are the person’s responsibilities, constraints, and emotional goals? The coauthors write: “The older person wants to feel cared about…safe…independent…in touch with their relatives and carers…(and) unburdened of the obligation of routinely get(ting) in touch with their relative/carer” (p. 448). This list then informs how software developers may build out an application to meet the core functional purpose of emergency communications along with the user’s emotional needs. The requirements inform the software design but may have implications for the aesthetics, the interface, the marketing, the sales strategies, and other aspects. For example, one of the requirements is that the “system must be accessible to the older person and invisible to everyone else” (p. 451), because of the potential risk to the pride and dignity of the user, who has to feel empowered, in personal control, and independent.
This team writes of their experiences:
While constructing the models for the emergency alarm system described in this paper, we found that the models lend themselves to a seamless transition between layers. The concepts used in higher abstraction layers are easily transformed into equivalent ones at lower abstraction layers, i.e. closer to implementation. For instance, the role model, defined in the conceptual domain modelling layer, provides basic information about the different roles involved in the system, human or otherwise, and highlights their responsibilities. The equivalent constructs one layer lower, are agents, which aggregate roles according to their capabilities as concrete agent types. At this stage, the agent model still includes both human and man-made agents. However, in the lowest level of abstraction, the platform-dependent layer, only the man-made agents are codified as software entities. (Sterling, Lopez-Lorca, & Kissoon-Curumsing, 2021, p. 460)
They also explain their design through an interaction sequence diagram.
Conceptual Experimentation
Petr Ivanovich Sosnin’s “Conceptual Experiments in Automated Designing” (Ch. 21) begins with specifying designers’ behavior in solving project tasks during conceptual design. This behavior is broken down into “behavior units as precedents and pseudo-code programming” as early work to systematize and automate design (p. 479). The Software Intensive Systems (SIS) designer approaches are captured using a survey tool. Also captured in the system are various system dependencies in the workflow. Such systems enable conceptual experimentation. This information informs the “intelligent processing of the solved tasks” and can provide the following components: “a new model of the precedent; a new projection of the existing precedent model; a modified version of the existing model of precedent; a new concept that evolves an ontology of the Experience Base; a modified concept of the ontology” (p. 485). Such setups enable experimentation about the workability of the design plans based on the pseudo-code and the “understandable and checkable forms” (p. 501).
Mobile Customer Relationship Management for Employment Recruitment
In Europe, various job recruitment agencies use customer relationship management (CRM) systems to connect job seekers with potential employers. Mobile CRM (mCRM), while used in a majority of the 35 recruitment agencies studied, is not yet put to full use, according to Tânia Isabel Gregório and Pedro Isaías’ “CRM 2.0 and Mobile CRM: A Framework Proposal and Study in European Recruitment Agencies” (Ch. 22). Effective uses of both CRM 2.0 and mobile CRM may enable heightened personalization of career recruitment efforts and more effective uses of the Social Web and social networking.
Modeling Software Development Process Complexity
Vyron Damasiotis, Panos Fitsilis, and James F. O'Kane’s “Modeling Software Development Process Complexity” (Ch. 23) suggests the importance of software development processes (SDPs) that align with the complexity of modern software. From a literature review, these researchers identify 17 complexity factors, including code size, size of application database, programming language level / generation, use of software development tools, use of software development processes, concurrent hardware development, development for reusability, software portability and platform volatility, required software reliability, completeness of design, detailed architecture risk resolution, development flexibility, “product functional complexity and number of non-functional requirements” (p. 533), software security requirements, “level of technical expertise and level of domain knowledge” (p. 534), and other factors. The complexity elements are integrated into a model in four categories: “organizational technological immaturity, product development constraints, product quality requirements, and software size” (p. 540). By weighting the complexity factors, the four categories were found to rank in the following descending order: “software size, product quality requirements, product development constraints, and organization technological immaturity” (p. 541). The researchers applied their model to five case studies (a management information system, a geographical information system, decision support systems, and a general information system) in domains related to financed projects, transportation, water management, healthcare, and work. This work offers the foundational design for a tool to help people manage complex software development projects.
Measuring System Misuse Cases to Inform on Cybersecurity Practices
Chitreshh Banerjee, Arpita Banerjee, and Santosh K. Pandey’s “MCOQR (Misuse Case-Oriented Quality Requirements) Metrics Framework” (Ch. 24) strives to create a system to anticipate various forms of malicious cyberattacks and to set up credible defenses, given the complexity of software. Of particular focus are various cases of “misuse” of computer systems, to inform on computer system vulnerabilities. The coauthors explain the scope of the issue: “As per available statistics, it has been estimated that around 90% of security incidents which are reported are due to the various defects and exploits left uncovered, undetected, and unnoticed during the various phases of the software development process” (p. 555).
Secure software “cannot be intentionally undermined or forced to fail; remains correct and predictable despite…the fact that intentional efforts could be made to compromise the dependability; continues operating correctly in the presence of most attacks; isolates, contains, and limits the damage which could result…due to any failures; is attack-resistant, attack tolerant and attack resilient” (McGraw, 2006, as cited in Banerjee, Banerjee, & Pandey, 2021, p. 557).
A vulnerability management life cycle occurs in the following steps: “discover, prioritize assets, assess, report, remediate, (and) verify” (Banerjee, Banerjee, & Pandey, 2021, p. 561).
“Security loopholes” may result in interrupted business, lost data, compromised privacy, loss of intellectual property, and other challenges. Different organizations and IT systems have different threat profiles and potential attack surfaces. The proposed Misuse Case Oriented Quality Requirements (MCOQR) metrics framework provides help in defining security requirements and support toward designing and deploying software (Banerjee, Banerjee, & Pandey, 2021, p. 572). This is a system that can work in alignment with existing threat assessment modeling and assessments.
Extending the Power of Packaged Software Customizations
Bryon Balint’s “Maximizing the Value of Packaged Software Customization: A Nonlinear Model and Simulation” (Ch. 25) focuses on the question of how much or how little an organization may want to customize a third-party Enterprise Resource Planning (ERP) system or other software system. Even if a software package is well chosen for fit with an organization, there may be additional anticipated and unanticipated needs that require additional work. Perhaps the software is modularized, and only particular parts of the tool may be activated based on licensing requirements. This chapter explores the customization decision at organizations. This study involves “modelling nonlinear relationships between the amount of time spent on custom development and the resulting benefits,” “modelling nonlinear relationships between development costs and maintenance costs,” and “modelling corrective development as a function of development related to fit and user acceptance” (p. 580). This work suggests that custom development occurs in four categories: to address gaps in fit, to facilitate user acceptance, to facilitate integration, and to enhance performance (Balint, 2021, pp. 583-584). This information enables simulation techniques to project when a customization approach may provide necessary organizational value and when it may not, informing managerial decision making. Will a change result in diminishing returns? What are the levels of risk in implementing new code? Is the manager biased towards the upside or the downside?
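The intuition of the nonlinear model can be illustrated with a toy simulation in Python: if benefits of customization grow concavely with effort while development-plus-maintenance costs grow convexly, net value peaks at some intermediate level. The curves below are assumptions chosen for demonstration, not Balint's calibrated model.

import numpy as np

effort = np.linspace(0, 100, 101)            # person-days of custom development
benefit = 400 * np.sqrt(effort)              # assumed concave benefit curve
cost = 10 * effort + 0.5 * effort ** 1.5     # assumed convex cost curve (incl. maintenance)
net = benefit - cost

best = int(effort[np.argmax(net)])
print(f"net value peaks at ~{best} person-days of customization")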
Engineering Technological Ecosystems
Rajeshwar Vayyavur’s “Software Engineering for Technological Ecosystems” (Ch. 26) offers a framework for understanding software ecosystems (SECOs), technological spaces for the design of software. The author identifies various architectural challenges for SECOs, including platform openness, technical integration (which applications to integrate), independent platform development, independent application development, qualities, and features for compliant software development (to regulations and standards). He offers in-depth insights about these various challenges, tradeoffs, and decision making in each dimension. Various SECOs have to accommodate the needs of different users of the tool, who engage with it in different use cases.
Testing Railway Signaling Systems
Jong-Gyu Hwang and Hyun-Jeong Jo’s “Automatic Static Software Testing Technology for Railway Signaling System” (Ch. 27) describes the harnessing of intelligent systems to ensure that the critical software code that runs railway signaling is validated. The automated testing tool described in this work can test both the software (written in C and C++) and the functioning of the live system. The co-researchers cite the standards that have to be met by the software, including the MISRA-C (Motor Industry Software Reliability Association) coding rules, IEC 61508, IEC 62279, other international standards, and Korean ones. This work includes examples moving from an initial “violated form” of code to improved code (p. 624).
Agile and Value Co-Creation
Bertrand Verlaine’s “An Analysis of the Agile Theory and Methods in the Light of the Principles of the Value Co-Creation” (Ch. 28) places “co-created value,” something achieved between a service provider and a customer, at the forefront of work collaborations. Agile, a theory for managing software implementation projects, aligns with the idea of value co-creation given the focus on customers and their needs. Agile rests on critical principles, including that “continuous attention to technical excellence and good design enhances agility” and “simplicity—the art of maximizing the amount of work not done—is essential” (Beck et al., 2001, in Agile Manifesto, as cited in Verlaine, 2021, p. 635). Agile is known for four values: “individuals and the interactions are privileged over processes and tools”; “working software is preferred to comprehensive documentation”; “customer collaboration takes a prominent place instead of contract negotiation”; “responding to change is favoured compared to following a plan” (pp. 634-635).
Agile has spawned various versions, including SCRUM, eXtreme Programming, Rapid Application Development, Dynamic Systems Development Method, Adaptive Software Development, Feature-Driven Development, and Crystal Clear. These agile methods are studied and summarized. Then, they are analyzed for their contribution to value co-creation with customers across a range of factors: “resources and competencies integration, consumer inclusion, interaction-centric, personalization, contextualization, (and) responsibility of all parties” (Verlaine, 2021, p. 646). Here, eXtreme Programming (XP) and Rapid Application Development (RAD) come out well in terms of supporting value co-creation.
Motion Estimation in Video
Shaifali Madan Arora and Kavita Khanna’s “Block-Based Motion Estimation: Concepts and Challenges” (Ch. 29) focuses on the importance of digital video compression, achieved by removing redundancies. The tradeoffs are between “speed, quality and resource utilization” (p. 651). The popularization of video streaming to mobile devices, as well as 3D television, has introduced other markets for such technologies and affects the technological requirements. This work reviews the present state of the video compression field and how it has evolved over time.
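Block-based motion estimation itself is easy to sketch: for a block in the current frame, search the reference frame for the displacement minimizing the sum of absolute differences (SAD). The following illustrative Python full-search example uses synthetic frames and a simulated shift; real coders use far faster search patterns.

import numpy as np

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(64, 64)).astype(np.int32)
current = np.roll(reference, shift=(2, 3), axis=(0, 1))   # simulate motion of (2, 3)

B, S = 16, 4                 # block size and search range (+/- pixels)
y0, x0 = 24, 24              # top-left corner of the block being matched
block = current[y0:y0 + B, x0:x0 + B]

best_sad, best_mv = None, (0, 0)
for dy in range(-S, S + 1):
    for dx in range(-S, S + 1):
        ref_block = reference[y0 + dy:y0 + dy + B, x0 + dx:x0 + dx + B]
        sad = np.abs(block - ref_block).sum()   # sum of absolute differences
        if best_sad is None or sad < best_sad:
            best_sad, best_mv = sad, (dy, dx)

print("estimated motion vector:", best_mv)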
Problem-Solving with Multimodal Coding Tools for Children
Kening Zhu’s “From Virtual to Physical Problem Solving in Coding: A Comparison on Various Multi-Modal Coding Tools for Children Using the Framework of Problem Solving” (Ch. 30) uses a research experimentation setup in which children experience one of four designed face-to-face workshops around coding, to better understand what conditions promote their abilities in problem-solving using computational thinking. This study focuses on the lower elementary ages of 7 to 10 years old, an age where logical reasoning is more developmentally available and second languages are acquired more efficiently. (Coding may be seen as another language of sorts.) This research found that “graphical input could keep children focused on problem solving better than tangible input, but it was less provocative for class discussion. Tangible output supported better schema construction and causal reasoning and promoted more active class engagement than graphical output but offered less affordance for analogical comparison among problems” (p. 677). The tangibles in this case involved a maze and a robot, which was appealing to young learners and elicited higher levels of engagement (p. 686).
Systems Engineering in Virtual Worlds and Open Source Software
Latina Davis, Maurice Dawson, and Marwan Omar’s “Systems Engineering Concepts with Aid of Virtual Worlds and Open Source Software: Using Technology to Develop Learning Objects and Simulation Environments” (Ch. 31) brings to mind how, while various hot technologies like immersive virtual worlds move in and out of favor, they can be irreplaceably useful when their particular teaching and learning affordances fit well. Students in an engineering course use high level systems analysis to inform their design of an ATM to meet customer needs. They go through the phases of a Systems Development Life Cycle (with planning, analysis, design, implementation, and maintenance) (p. 704). They work through various case scenarios of a person going to an ATM for service and consider ways to build out a system to meet their needs. They examine required functions, objects, actions, and other necessary components (p. 709). They draw out a sequence diagram, activity diagram (p. 711), and a systems sequence diagram (p. 712), using formal diagrammatic expressions. They also integrate a virtual ATM scenario in Second Life, in which they conduct some research about virtual human-embodied avatars and their efficiencies in using the virtual ATM.
Chaotic Firefly Algorithm for Software Testing
Abhishek Pandey and Soumya Banerjee’s “Test Suite Optimization Using Chaotic Firefly Algorithm in Software Testing” (Ch. 32) focuses on the importance of effective auto-created test cases to test software. The cases have to meet a range of criteria, such as “statement coverage, branch coverage” and other factors, to be effective for testing (p. 722). Various algorithms may be used to create test cases, and these are pitted against one another to see which creates the most useful test data. In this work: “Major research findings are that chaotic firefly algorithm outperforms other bio-inspired algorithm such as artificial bee colony, Ant colony optimization and Genetic Algorithm in terms of Branch coverage in software testing” (p. 722). This team found that the created test cases were fit and resulted in optimized test cases (p. 729). These experiments were performed in MATLAB.
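The “chaotic” element typically means replacing uniform randomness with a chaotic map such as the logistic map. The following illustrative Python sketch (not the authors' algorithm) uses a logistic-map sequence to drive candidate inputs for a toy function and tallies branch coverage; the function under test and all values are invented.

def under_test(x, y):
    # Toy function whose branches we want a test generator to cover.
    branches = set()
    if x > 50:
        branches.add("x>50")
    else:
        branches.add("x<=50")
    if y % 2 == 0:
        branches.add("y even")
    else:
        branches.add("y odd")
    return branches

def logistic_map(seed=0.7, r=4.0):
    z = seed
    while True:
        z = r * z * (1 - z)   # chaotic sequence in (0, 1)
        yield z

covered = set()
chaos = logistic_map()
for _ in range(20):
    x = int(next(chaos) * 100)
    y = int(next(chaos) * 100)
    covered |= under_test(x, y)

print(f"branches covered: {len(covered)} of 4 -> {sorted(covered)}")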
Figure 4. Digital Tree at Night
Customer-Informed Software Development
Marco Antônio Amaral Féris’ “QPLAN: A Tool for Enhancing Software Development Project Performance with Customer Involvement” (Ch. 33) strives to integrate best practices for software development into a technology used for project planning. The intuition behind this tool is that software projects would be more successful if the quality of the planning is high and integrates some research-based best practices. The QPLAN system offers a 12-step manual approach to software development project planning in a technology system that offers tips along the way. The system enables testing of its functions and structure, using both white box and black box testing. QPLAN has a straightforward interface and a fairly transparent process, based on this chapter. Project success results in work efficiency for the team and in “effectiveness, customer satisfaction, and business results” for the customers (p. 755).
Motion Vectors for Error Concealment in 2D and 3D Video
Hugo R. Marins and Vania V. Estrela’s “On the Use of Motion Vectors for 2D and 3D Error Concealment in H.264/AVC Video” (Ch. 34) explores the complexities of hiding errors in motion videos using “intra- and inter-frame motion estimates, along with other features such as the integer transform, quantization options, entropy coding possibilities, deblocking filter” and other computational means (p. 765). This capability is “computationally-demanding” in video codecs in general and the H.264/AVC video compression standard and coder/decoder in particular. The coauthors note that there is a lack of standardized performance assessment metrics for error concealment methods and suggest that future research may address this shortcoming (p. 782). This and other works in this anthology showcase the deep complexities behind common technologies that the general public may use without thinking.
Clustering Software Modules
Kawal Jeet and Renu Dhir’s “Software Module Clustering Using Bio-Inspired Algorithms” (Ch. 35) proposes a method to automatically cluster software into modules. This tool is conceptualized as being useful to “recover the modularization of the system when source code is the only means available to get information about the system; identify the best possible package to which classes of a java project should be allocated to its actual delivery; combine the classes that could be downloaded together” (p. 789). To achieve the optimal clustering, these researchers go to bio-inspired algorithms (“bat, artificial bee colony, black hole and firefly algorithm”) and propose hybridizing them “with crossover and mutation operators of the genetic algorithm” (p. 788). They tested their system on seven benchmark open-source software systems. They found that their mix was shown to “optimize better than the existing genetic and hill-climbing approaches” (p. 788). Efficient modularization is especially relevant during the maintenance phase of a software development life cycle.
Mapping Software Evolution
Liguo Yu’s “Using Kolmogorov Complexity to Study the Coevolution of Header Files and Source Files of C-alike Programs” (Ch. 36) begins with what sounds like an idle question but which turns out to be clever and relevant. The question is whether header and source files co-evolve together during the evolution of an open source software system (the Apache HTTP web server). Specifically, do C-alike programs [C, C++, Objective-C] show correlation between the header and source files (which is how source code is divided), or do large gaps form? Header files contain “the program structure and interface” and are hypothesized to be “more robust than source files to requirement changes and environment changes” (p. 815). The author writes:
During the software evolution process, both these two kinds of files need to adapt to changing requirement(s) and changing environment(s). This paper studies the coevolution of header files and source files of C-alike programs” to measure the “header file difference and source file difference between versions of an evolving software product” to understand the “difference in (the) pace of evolution. (Yu, 2021, p. 814)
This research resulted in the observation of “significant correlation between header distance and source distance.” More specifically, changes to header and source files correlated where “larger changes to header files indicates larger changes to source files on the same version and at the same time; smaller changes to header files indicates smaller changes to source files on the same version and at the same time, and vice versa” (Yu, 2021, p. 822). Another innovation involved using the Kolmogorov complexity and normalized compression to study software evolution (p. 822).
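Because Kolmogorov complexity is uncomputable, such studies typically approximate it with a real compressor. A minimal Python sketch of the normalized compression distance (NCD) between two file versions, using zlib and invented header contents, looks like this:

import zlib

def c(data: bytes) -> int:
    # Compressed length stands in for (uncomputable) Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

header_v1 = b"int add(int a, int b);\nint sub(int a, int b);\n"
header_v2 = b"int add(int a, int b);\nint sub(int a, int b);\nint mul(int a, int b);\n"
print(round(ncd(header_v1, header_v2), 3))   # small value -> versions are similar

Computing such distances for header files and source files across successive releases, and then correlating the two series, is the general shape of the analysis described.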
R4 Model for Software Fault Prediction
Ekbal Rashid’s “R4 Model for Case-Based Reasoning and Its Application for Software Fault Prediction” (Ch. 37) describes a model which involves learning from observed examples of faults in software and using that information to engage in software quality prediction (by anticipating other similar faults in other programs), based on various similarity functions and distance measures: Euclidean Distance, Manhattan Distance, Canberra Distance, Clark Distance, and Exponential Distance (pp. 832-833). This system predicts the quality of software based on various software parameters: “number of variables, lines of code, number of functions or procedures, difficulty level of software, experience of programmer in years, (and) development time” (p. 838), with a low error rate.
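The distance measures at the heart of such case-based retrieval are simple to express. The following illustrative Python sketch defines a few of them and retrieves the nearest past case for a new software project; the feature vectors are invented for demonstration.

import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def canberra(a, b):
    # Skip terms where both values are zero to avoid division by zero.
    return sum(abs(x - y) / (abs(x) + abs(y)) for x, y in zip(a, b) if abs(x) + abs(y) > 0)

# Features: (variables, lines of code, functions, programmer experience in years)
past_cases = {"case1": (12, 400, 9, 2), "case2": (30, 1500, 25, 6)}
new_case = (14, 450, 10, 3)

nearest = min(past_cases, key=lambda k: euclidean(past_cases[k], new_case))
print("most similar past case:", nearest)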
Automatic Test Data Generation
Madhumita Panda and Sujata Dash’s “Automatic Test Data Generation Using Bio-Inspired Algorithms: A Travelogue” (Ch. 38) opens with a summary of the metaheuristic algorithms used over the past several decades to create test data. The coauthors generate path coverage-based test data using “Cuckoo Search and Gravitational Search algorithms” and compare the results to those created using “Genetic Algorithms, Particle Swarm optimization, Differential Evolution and Artificial Bee Colony algorithm” (p. 848). The topline finding is that “the Cuckoo search algorithm outperforms all other algorithms in generating test data due to its excellent exploration and exploitation capability within less time showing better coverage and in comparatively fewer number(s) of generations” (p. 864).
Software Birthmarks
Takehiro Tsuzaki, Teruaki Yamamoto, Haruaki Tamada, and Akito Monden’s “Scaling Up Software Birthmarks Using Fuzzy Hashing” (Ch. 39) proposes a method for creating “software birthmarks” based on native aspects of the software, to enable comparing versions of the software to identify potential software theft (p. 867). This team builds on the original idea by Tamada et al. in 2004 and adds feature improvements to current birthmark systems. Their approach involves “transforming birthmarks into short data sequences, and then using the data obtained to compute similarity from a simple algorithm” (p. 870). These hash functions enable heightened efficiencies for the comparisons.
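As a rough illustration of the kind of cheap similarity comparison that fuzzy hashing is meant to make scalable (and not the authors' actual birthmark or hashing scheme), one can compare two byte sequences by Jaccard similarity over sliding n-grams in Python:

def ngrams(data: bytes, n: int = 4):
    # Sliding n-grams stand in for the short data sequences being compared.
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

# Hypothetical extracted "birthmark" sequences from two binaries.
original = b"push ebp; mov ebp, esp; call validate_license; ret"
suspect  = b"push ebp; mov ebp, esp; call validate_licence; ret"
print(round(similarity(original, suspect), 2))   # high value suggests shared origin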
Going to Bio-Inspired Algorithms for Software Testing
Abhishek Pandey and Soumya Banerjee’s “Bio-Inspired Computational Intelligence and Its Application to Software Testing” (Ch. 40) serves as an effective baseline article on how some bio-inspired algorithms are applied to software testing problems, including “test case generation, test case selection, test case prioritization, test case minimization” (p. 883). The coresearchers offer flowcharts and other setups describing their sequences. Indeed, it is one thing to have available algorithms for various processes, but it is critical to have people with the expertise to harness the technologies for practical purposes.
Mission Critical Communications
Anthony Olufemi Tesimi Adeyemi-Ejeye, Geza Koczian, Mohammed Abdulrahman Alreshoodi, Michael C. Parker, and Stuart D. Walker’s “Ultra-High-Definition Video Transmission for Mission-Critical Communication Systems Applications: Challenges and Solutions” (Ch. 41) takes can’t-fail surveillance systems in emergency contexts as its base assumption. They present some ways that ultra-high-definition video may be transmitted, stored, and made available (p. 902). They conceptualize the video as compressed or uncompressed, distributed by wired or wireless transmission. They write: “With the evolution and in particular the latest advancements in spatial resolution of video alongside processing and network transmission, ultra-high-definition video transmission is gaining increasing popularity in the multimedia industry” (p. 911). Indeed, the technologies put into place for these capabilities will have implications for non-surveillance video as well, such as video for storytelling, art, and other purposes.
Access Control Engineering
Solomon Berhe, Steven A. Demurjian, Jaime Pavlich-Mariscal, Rishi Kanth Saripalle, and Alberto De la Rosa Algarín’s “Leveraging UML for Access Control Engineering in a Collaboration on Duty and Adaptive Workflow Model that Extends NIST RBAC” (Ch. 42) focuses on security engineering. They build on a prior work: the formal Collaboration on Duty and Adaptive Workflow (CoD/AWF) model. They leverage the Unified Modeling Language (UML) “to achieve a solution that separates concerns while still providing the means to securely engineer dynamic collaborations for applications” (p. 916). They present their ideas through a role slice diagram (p. 921), a UML-extended role slice diagram for collaboration steps (p. 935), a proposed UML COD/AWF Obligation Slice Diagram (p. 936), a collaboration workflow (p. 937), and other explanatory diagrams.
Interacting Software Components
Fadoua Rehioui and Abdellatif Hair’s “Towards a New Combination and Communication Approach of Software Components” (Ch. 43) reconceptualizes ways to reorganize software components based on a viewpoint approach: linking software components to particular types of system users and assigning a Manager Software component to enable communications between the various system components. The effort is ultimately to suggest a pattern “that ensures the combination and communication between software components” (p. 941).
Software Development Processes in Small Firms in Developing Countries
Delroy Chevers, Annette M. Mills, Evan Duggan, and Stanford Moore’s “An Evaluation of Software Development Practices among Small Firms in Developing Countries: A Test of a Simplified Software Process Improvement Model” (Ch. 44) proposes a new approach to software process improvement (SPI) programs for use in developing countries and in small firms with limited resources. Such entities have less capacity to deal with potential failures. The approach involves 10 key software development practices, supported by project management technology. Some ideas that inform this tool are that institutionalized quality practices in an organization stand to improve software quality; likewise, people skills can be critical. A study of 112 developer/user dyads from four developing English-speaking Caribbean countries found a positive impact on software product quality (p. 955), based on this version of the tool.
An Internationalized Computer Science Curriculum around Software Engineering
Liguo Yu’s “From Teaching Software Engineering Locally and Globally to Devising an Internationalized Computer Science Curriculum” (Ch. 45) shares learning from the author’s teaching of software engineering at two universities, one in the U.S. and one in the P.R.C. The teaching of “non-technical software engineering skills” in an international curriculum may be challenging given the differences in respective cultures, business environments, and government policies. To achieve effective learning, the professor has to apply flexibility in integrating “common core learning standards and adjustable custom learning standards” (p. 984). The course is built around problem-based learning, which raises particular challenges:
- “How to design an assignment so that it is more similar to a real-world problem?
- How to help students solve a problem so that appropriate assistance should be given to students without suppressing their creativity?
- How to evaluate students’ performance if the assigned problem is not solved or not completely solved?” (p. 985)
In these scenarios, students take on role-playing positions and problem-solve with IT solutions (Yu, 2021, pp. 990-991), such as how to reduce healthcare costs (p. 992), enhance community safety (p. 992), promote food safety (p. 994), and others. Part of the learning objective is to have students familiarize themselves with different domains in which IT may be applied (p. 996). The two locations of the students highlight some of the potential geopolitical sensitivities given the great power competition. Perhaps such courses may serve as bridging mechanisms between peoples.
Identifying Free Open-Source Software for Library Resources
David William Schuster’s “Selection Process for Free Open Source Software” (Ch. 46) shares a systematic approach for how a (public?) library may make a decision about making open source software accessible as part of its holdings. This work brings up issues of eliciting a requirements list from the staff, from the community, and even from the software makers. There are issues of software compatibility with other relevant technologies. There are legalities, technical issues (installation, maintenance, and others), user support, and other considerations. The author also points to considerations into the future, such as maintenance and upgrades.
Security Considerations in Agile Software Development
Kalle Rindell, Sami Hyrynsalmi, and Ville Leppänen’s “Fitting Security into Agile Software Development” (Ch. 47) highlights some of the incompatibilities between security engineering and the software development process. While security practice should be continuous, it is often “a formal review at a fixed point in time, not a continual process truly incorporated into the software development process” (p. 1029). The researchers offer a simplified iterative security development process with a security overlay at the various phases of software development (p. 1030), including requirements gathering, design, implementation, verification, release, and operations. They cite the importance of keeping a vulnerability database. They emphasize the importance of security assurance for software.
Three Dimensional Medical Images
Mohamed Fawzy Aly and Mahmood A. Mahmood’s “3D Medical Images Compression” (Ch. 48) highlights the importance of enabling efficient and effective compression of these large images for both transfer and storage. In the process, the critical visual information has to be retained for analysis and record-keeping. The coauthors describe the current processes, with limitations.
Software Bug Triaging
Anjali Goyal and Neetu Sardana’s “Analytical Study on Bug Triaging Practices” (Ch. 49) explores a range of methods for identifying the most critical bugs to fix and assigning those to a developer, in order to correct the source code. In their summary, they examine various approaches: classifiers, recommender systems, bug repositories, and other technologies and methods. Different bug assignment techniques include the following: “machine learning, information retrieval, tossing graphs, fuzzy set, Euclidean distance, social network based techniques, information extraction, (and) auction based technique” (p. 1079), in stand-alone and combinatorial ways. Over the years, different techniques have come to the fore, and from about 2007 onwards, information retrieval seems to have predominated (vs. machine learning alone, or machine learning and information retrieval together) (p. 1082). Various measures have been used to assess the effectiveness of bug assignment, including “precision, recall, accuracy, F score, hit ratio, mean reciprocal rank, mean average precision, and top N rank” (p. 1083). At present, even with the many advances in bug report assignment methods, the process is not fully automated. There are various problems, described as “new developer problem, developers switching teams and deficiency in defined benchmarks” (p. 1088). The survey-based research culminates in six research questions that may be used to address bug triaging and to help others formulate plans for optimal bug triaging.
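The evaluation measures listed are straightforward to compute once a triager produces ranked developer recommendations. The following illustrative Python sketch computes mean reciprocal rank and a hit ratio over invented bugs, recommendations, and actual fixers:

# Ranked developer recommendations per bug (hypothetical).
recommendations = {
    "bug-101": ["dev_a", "dev_b", "dev_c"],
    "bug-102": ["dev_d", "dev_a", "dev_e"],
    "bug-103": ["dev_f", "dev_g", "dev_h"],
}
actual_fixer = {"bug-101": "dev_b", "bug-102": "dev_d", "bug-103": "dev_z"}

def mean_reciprocal_rank(recs, truth):
    total = 0.0
    for bug, ranked in recs.items():
        if truth[bug] in ranked:
            total += 1.0 / (ranked.index(truth[bug]) + 1)
    return total / len(recs)

def hit_ratio_at_k(recs, truth, k=3):
    hits = sum(truth[bug] in ranked[:k] for bug, ranked in recs.items())
    return hits / len(recs)

print("MRR:", round(mean_reciprocal_rank(recommendations, actual_fixer), 3))
print("hit ratio@3:", round(hit_ratio_at_k(recommendations, actual_fixer), 3))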
Machine Learning Prediction of Software Abnormal States
Software systems may age out, in a phenomenon termed “smooth degradation” or “chronics,” in Yongquan Yan and Ping Guo’s “Predicting Software Abnormal State by using Classification Algorithm” (Ch. 50) (2021, p. 1095). The outward appearance of aging is preceded by a long delay, even as the phenomenon is already occurring. The authors propose a method for detecting software aging: “Firstly, the authors use proposed stepwise forward selection algorithm and stepwise backward selection algorithm to find a proper subset of variables set. Secondly, a classification algorithm is used to model (the) software aging process. Lastly, t-test with k-fold cross validation is used to compare performance of two classification algorithms” (p. 1095). The method is tested in this research and found to be effective and efficient.
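The comparison step the authors describe (k-fold cross validation plus a t-test between two classifiers) can be sketched roughly with standard libraries; the synthetic dataset, the two classifier choices, and the fold count in this Python example are assumptions for illustration, not the authors' pipeline.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from scipy.stats import ttest_rel

# Synthetic data standing in for a binary "aging / not aging" label.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6, random_state=0)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores_lr = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
scores_dt = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)

t_stat, p_value = ttest_rel(scores_lr, scores_dt)   # paired t-test over the folds
print(f"logistic regression mean acc: {scores_lr.mean():.3f}")
print(f"decision tree mean acc:       {scores_dt.mean():.3f}")
print(f"paired t-test p-value:        {p_value:.3f}")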
Building in Quality to Software Systems
Zouheyr Tamrabet, Toufik Marir, and Farid Mokhati’s “A Survey on Quality Attributes and Quality Models for Embedded Software” (Ch. 51) provides a summary of the research literature on how “quality” is conceptualized, actualized, measured, and modeled for software embedded in various systems.
Video Compression
Wei Li, Fan Zhao, Peng Ren, and Zheng Xiang’s “A Novel Adaptive Scanning Approach for Effective H.265/HEVC Entropy Coding” (Ch. 52) proposes improvements to the current compression of video in the H.265 format.
Human Factors in Software Development
Sergey Zykov’s “Software Development Crisis: Human-Related Factors' Influence on Enterprise Agility” (Ch. 53) focuses on the need for both “technological and anthropic-oriented” factors to create effective software. To head off potential misconceptions between customers and developers, Zykov suggests that both technical and soft skills are required, with learned general “architectural” practices of technical and human organization and intercommunication to improve the work, based on lessons learned in other fields, like the nuclear power industry, the oil and gas industry, and others.
“Taming” Openness in Software Innovation
Mehmet Gencer and Beyza Oba’s “Taming of ‘Openness' in Software Innovation Systems” (Ch. 54) explores how to balance the “virtues of OSS community while introducing corporate discipline” without driving away the volunteers or contributors to open source software projects (p. 1163). Large-scale OSS projects flourish in innovative ecosystems conducive to R&D. Such projects are open to feedback from users, who have different needs from the OSS. If openness and creativity are “wild,” then instantiating some order may be seen as “taming.” This project involves the study of six community-led cases of OSS with wide usage: Apache, Linux, Eclipse, Mozilla, GCC (GNU Compiler Collection), and Android (p. 1164). This cross-case analysis results in findings of various governance mechanisms that bring some taming to the work. One common approach involves voting-based schemes in which developers who are skilled and aligned with the values of the community are upvoted (p. 1169) and so gain collective credibility. Other taming elements in OSS communities include licensing regimes, strategic decision making approaches, organizational structures / leadership structures, quality assurance standards, and others (p. 1172).
SaaS, Semantic Web, Big Data
Kijpokin Kasemsap’s “Software as a Service, Semantic Web, and Big Data: Theories and Applications” (Ch. 55) strives to stitch together the SaaS, Semantic Web, and Big Data of the title, but this work does not really say more than that these are computational capabilities that are somewhat contemporaneous. The argument is that knowledge of these capabilities is important for organizational performance, but a mere summary without further analysis is not as powerful as this work could be.
Requirements Engineering (RE) for Software Ecosystems
Aparna Vegendla, Anh Nguyen Duc, Shang Gao, and Guttorm Sindre’s “A Systematic Mapping Study on Requirements Engineering in Software Ecosystems” (Ch. 56) involves a study of the published research literature on software ecosystems or SECOs. The researchers found that research was “performed on security, performance and testability” but did not include much in the way of “reliability, safety, maintainability, transparency, usability” (p. 1202). This review work suggests that there may be areas that would benefit from further research.
Going Agile in Alignment with ISO/IEC 29110 Entry Profile
Sergio Galvan-Cruz, Manuel Mora, Rory V. O'Connor, Francisco Acosta, and Francisco Álvarez’s “An Objective Compliance Analysis of Project Management Process in Main Agile Methodologies with the ISO/IEC 29110 Entry Profile” (Ch. 57) identifies gaps between two industrial ASDMs (agile software development methodologies) and the ISO/IEC 29110 Entry Profile…but finds closer adherence with the academic ASDM (UPEDU), which “fits the standard very well but…is scarcely used by VSEs” (“very small entities”), perhaps due to a “knowledge gap” (p. 1227). Such works can provide helpful word-of-mouth and may have an effect on the uptake of particular project management approaches.
Small Packaged Software Vendor Enterprises
Moutasm Tamimi and Issam Jebreen’s “A Systematic Snapshot of Small Packaged Software Vendors' Enterprises” (Ch. 58) involves the collection of over 100 articles about small packaged software vendors’ enterprises (SPSVEs). The systematic search for these articles involved a range of search strings in various databases. The authors used a “systematic snapshot mapping” (SSM) method (p. 1262). They collected the works in a database. They offer some light insights about these enterprises, such as the software lifecycle for SPSVEs.
Distributed Usability Testing
There are constructive collaborations to be had between those in academia and in industry, particularly in the area of usability and user experience (UX) design. Amber L. Lancaster and Dave Yeats’ “Establishing Academic-Industry Partnerships: A Transdisciplinary Research Model for Distributed Usability Testing” (Ch. 59) describes a constructive experience in which graduate students applied to work as co-investigators in a transdisciplinary exploration of a product’s usability. The team ultimately included users; a research team composed of usability researchers, technical writers, and IT professionals; and additional stakeholders including “software developers, product managers, legal professionals, and designers” (p. 1293). The work that follows reads as intensive and professional, with the student team following in-depth test protocols and using various design scenarios to elicit feedback from users. This case is used to laud academic-industry partnerships that advance the professional applicability of the curriculum and pedagogy.
Streaming Coded Video on Peer-to-Peer Networks
Muhammad Salman Raheel and Raad Raad’s “Streaming Coded Video in P2P Networks” (Ch. 60) proposes a solution for delivering video on peer-to-peer networks even when different video coding techniques (Scalable Video Coding, Multiple Description Coding, and others) are used, while controlling for playback latency and other quality-of-service features (the ability to find relevant contents, service reliability, security threats, and others). What follows is a summary of the current video coding techniques and streaming methods, and their respective strengths and weaknesses.
ISO and Government Policy
Veeraporn Siddoo and Noppachai Wongsai’s “Factors Influencing the Adoption of ISO/IEC 29110 in Thai Government Projects: A Case Study” (Ch. 61) focuses on an international process lifecycle standard designed for very small entities (VSEs) (p. 1340). The research team elicited feedback from four Thai government organizations that attained the ISO/IEC 29110 Basic Profile Certification to better understand what contributed to their successful implementation of the standards and what barriers they faced. They found that the success factors included the following: “supportive organizational policy, staff participation, availability of time and resources for the improvement of the software process, consultations with the SIPA and team commitment and recognition” (p. 1340). The barriers they found include “time constraints, lack of experience, documentation load, unsynchronized means of communication and improper project selection” (p. 1340), although some of the barriers seemed to stem from local work conditions and work processes (and not from the standards themselves). This work shows the importance of understanding how a government deploys its resources for ICT and its integration into work. It also suggests the need for further support if the standards are to be adopted and applied successfully. The research here is qualitative, and the chapter includes quotes from the study’s participants to add insight and human interest.
Automated Framework for Software Process Model
Swati Dhingra, Mythili Thirugnanam, Poorvi Dodwad, and Meghna Madan’s “Automated Framework for Software Process Model Selection Based on Soft Computing Approach” (Ch. 62) studies the factors that affect which process model is used for software development projects, with the aim of creating a rigorous program that meets needs, stays under budget, and has as few faults as possible over the software lifespan. This work includes a review of the literature and a survey with respondents representing different professional roles in IT. The authors propose an automated framework for selecting the process model based on an inferential “fuzzy-based rule engine” and a J-48 decision tree considering various factors (p. 1367). Theirs is a model to suggest which process model may be most applicable for a particular project and ultimately to inform the work of project managers and others.
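To make the selection idea concrete, here is a minimal sketch with hypothetical project attributes and labels; scikit-learn’s CART-style tree stands in for the chapter’s J-48 (C4.5) classifier, and the fuzzy rule engine is omitted. This is illustrative only, not the authors’ framework.

```python
# Minimal, illustrative sketch only -- not the authors' framework.
# A decision tree (scikit-learn's CART, standing in for J-48/C4.5) suggests a
# process model from hypothetical project attributes.
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [requirements_stability, team_size, criticality]
X_raw = [
    ["stable",   "small", "low"],
    ["stable",   "large", "high"],
    ["volatile", "small", "low"],
    ["volatile", "large", "high"],
]
y = ["Waterfall", "Spiral", "XP", "Scrum"]  # illustrative labels only

encoder = OrdinalEncoder()
X = encoder.fit_transform(X_raw)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

new_project = encoder.transform([["volatile", "small", "high"]])
print(tree.predict(new_project))  # e.g., an agile-leaning recommendation
```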
TrimCloud in Developing Countries for Education?
Beatriz Adriana Gomez and Kailash Evans’ “A Practical Application of TrimCloud: Using TrimCloud as an Educational Technology in Developing Countries” (Ch. 63) makes the case for harnessing an open-source virtual desktop infrastructure in developing countries for educational usage, to host software and desktops. In the abstract, the coauthors argue for “refurbished legacy systems as the alternative hardware source for using TrimCloud” (p. 1391). Ironically, a Google search suggests that TrimCloud no longer exists, and there are only a few references to this article.
Requirement Defect and Execution Flow Dependency(??)
Priyanka Chandani and Chetna Gupta’s “An Exhaustive Requirement Analysis Approach to Estimate Risk Using Requirement Defect and Execution Flow Dependency for Software Development” (Ch. 64) focuses on how to lower the risks of project failure by conducting a thorough early review of business requirements and required functionalities. Then, too, there should be an assessment of requirement defects, which are a major challenge because “they prevent smooth operation and is (sic) taxing both in terms of tracking and validation” (p. 1405). Accurate requirements engineering (RE) is often conducted early in a project. That step should include assessments of technical challenges, path dependencies (and the various related “calls” to certain functions), and a cumulative project risk assessment. If possible, risk ratings should be applied to particular endeavors (based on project requirements).
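As a purely illustrative sketch (not the chapter’s method), a per-requirement risk index might combine an estimated defect likelihood, the severity of failure, and how many execution paths depend on the requirement, so the riskiest items surface first for review. All names and weights below are hypothetical.

```python
# Illustrative only: a simple multiplicative risk index per requirement.
# Values and weights are hypothetical.
def risk_score(defect_likelihood: float, severity: float, dependents: int) -> float:
    """Higher when a defect-prone, severe requirement has many dependents."""
    return defect_likelihood * severity * (1 + dependents)

requirements = {
    "user_login":    {"defect_likelihood": 0.2, "severity": 0.9, "dependents": 7},
    "report_export": {"defect_likelihood": 0.4, "severity": 0.5, "dependents": 1},
}

ranked = sorted(requirements.items(), key=lambda kv: risk_score(**kv[1]), reverse=True)
for name, attrs in ranked:
    print(f"{name}: risk={risk_score(**attrs):.2f}")
```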
Customizing Packaged Software
Bryon Balint’s “To Code or Not to Code: Obtaining Value From the Customization of Packaged Application Software” (Ch. 65) echoes an earlier work with a different title. This work refers to a method for weighing the pros and cons of customizing packaged application software of various types. Custom developments, according to the cited researcher, occur for four basic reasons: closing “the gap in fit,” supporting “user acceptance” (p. 1429), integration, and system performance improvement (p. 1430). The costs of such customizations are non-trivial, for the development and then the continuing maintenance (p. 1431). The author models various dynamics, finding that early fit of the technology to needs lowers the cost of customizations (p. 1432) and that keeping development costs down raises the overall value of the system (p. 1432). His model also identifies “inflection points”: one at which the net value of the custom development begins to increase with the starting fit, and another at which it begins to decrease (p. 1433). Increasing user acceptance benefits the value of the software (p. 1433). The essential ideas are reasonable and intuitive.
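A toy calculation, with made-up functions and constants rather than Balint’s actual model, can still make the trade-off concrete: net value shrinks as the starting fit rises (there is less gap left to close) and as development and maintenance costs grow.

```python
# Toy model only -- not Balint's. Net value of customization as benefit from
# closing the "fit gap" and lifting user acceptance, minus development cost and
# recurring maintenance. All numbers are hypothetical.
def net_value(starting_fit, acceptance_gain, dev_cost, years=5, maint_rate=0.2):
    benefit = (1 - starting_fit) * 100 + acceptance_gain * 50   # value of closing the gap
    maintenance = dev_cost * maint_rate * years                 # ongoing cost of custom code
    return benefit - dev_cost - maintenance

for fit in (0.5, 0.7, 0.9):
    print(fit, round(net_value(starting_fit=fit, acceptance_gain=0.3, dev_cost=20), 1))
```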
Manual Coding of Group Selfies
Shalin Hai-Jew’s “Creating an Instrument for the Manual Coding and Exploration of Group Selfies on the Social Web” (Ch. 66) was created by the reviewer, so this will not be reviewed here.
Cloud Traffic Classification for Prioritization
The Internet of Things (IoT), cloud computing, and software-defined networking require new standards to enable smooth functioning, according to Mohit Mathur, Mamta Madan, and Kavita Chaudhary’s “A Satiated Method for Cloud Traffic Classification in Software Defined Network Environment” (Ch. 67). This work explores a method for marking cloud traffic to enable prioritization using the DSCP field of the IP header (p. 1509), based on a differentiated services architecture (p. 1511).
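For illustration only (this is not the chapter’s system), outbound packets can be marked with a DSCP class by setting the IP TOS byte on a socket; DSCP occupies the upper six bits of that byte, so the chosen code point is shifted left by two. The class, address, and port below are examples.

```python
# Illustrative DSCP marking of outbound UDP traffic (Linux/macOS).
# The DSCP class chosen here (EF = 46) and the address/port are examples only.
import socket

DSCP_EF = 46               # Expedited Forwarding, commonly used for latency-sensitive flows
tos = DSCP_EF << 2         # DSCP sits in the upper six bits of the legacy TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
sock.sendto(b"hello", ("192.0.2.10", 9999))   # 192.0.2.0/24 is a documentation range
sock.close()
```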
Collaboration around Agile Software Engineering in Higher Education???
Pankaj Kamthan’s “On the Nature of Collaborations in Agile Software Engineering Course Projects” (Ch. 68) focuses on the different types of learning collaborations in course projects in software engineering education. An important part of the skillset involves soft skills. This work describes different collaboration patterns in the learning space: student-student, team-teaching assistant, team-teacher, team-internal representative (like capstone projects, with examiners), and team-external representative (like capstone projects, with a “customer”) (p. 1540). These projects cover a variety of hands-on and experiential learning based on problem-solving with innovations and technical knowledge. This is a well-presented and substantive work.
Improving Open Systems
James Austin Cowling and Wendy K. Ivins’ “Assessing the Potential Improvement an Open Systems Development Perspective Could Offer to the Software Evolution Paradigm” (Ch. 69) asks how software evolution may be improved and be more responsive to client needs. The coauthors find power in three “divergent” methodologies: Plan-Driven, Agile, and Open Source Software Development (p. 1553). An open source approach entails stakeholders who collaborate around a shared broadscale endeavor. There are arbitration factors that enable governance and shared decision making. There are software artefacts deployed to stakeholders’ environments to “ensure ongoing system viability” (p. 1560). While Plan-driven or Agile methods are deployed, there is a “focus on quality and fitness-for-purpose” based on exploration of customer needs which is absent in open-source endeavors (p. 1563). Open source software has to provide value to its community even as there is “lack of definition, prediction and monitoring of a likely return on investment,” which makes this approach “a significant challenge” for adoption in a corporate setting (p. 1563). What is beneficial to planned and agile methods may be “delivery measurement practices, refinement of agreements in principle into requirements, and open engagement across a wide stakeholder community” (p. 1563).
Offshore Software Testing
Software engineering is a global process, with employees hailing from different locations. Tabata Pérez Rentería y Hernández and Nicola Marsden’s “Offshore Software Testing in the Automotive Industry: A Case Study” (Ch. 70) explores the experiences of testers in India working for an automotive supplier to a German company. Their mixed-method study of the testers’ experiences included semi-structured interviews. These researchers found that “manual testing was a boring activity when done over a period of time” (p. 1581), especially among more experienced testers. Many testers felt that they did not receive the level of respect or recognition that developers do (p. 1583). They wanted more time allotted for automated testing (p. 1584). Also, the researchers found that “sharing equipment was a frequent problem that testers face. Testers have to hunt for equipment either among other testers or developers” (p. 1584). Companies do well to promote the well-being of their employees in all locations.
Computational Thinking in Primary Schools
Gary Wong, Shan Jiang, and Runzhi Kong’s “Computational Thinking and Multifaceted Skills: A Qualitative Study in Primary Schools” (Ch. 71) involves a study at two primary schools in Hong Kong examining the efficacy of teaching computational thinking to children through visual programming tools. The qualitative research includes “classroom observations, field notes and group interviews” and also a “child-centered interview protocol to find out the perception of children in learning how to code” (p. 1592), such as whether or not they felt the process helped their problem-solving and creativity. The researchers share their pedagogical design framework, their teaching methods, their research protocols, and their findings, in a methodical and reasoned work.
Agile Impacts on Software Processes
An important management function involves striving to achieve greater work efficiencies, accuracy, and productivity. George Leal Jamil and Rodrigo Almeida de Oliveira’s “Impact Assessment of Policies and Practices for Agile Software Process Improvement: An Approach Using Dynamic Simulation Systems and Six Sigma” (Ch. 72) proposes the use of computer simulation models for evaluating software quality improvement. Their approach uses the Six Sigma (6 σ) methodology to find areas in which to improve work processes. Their test of the simulated model showed measurable benefits: “The earnings with the new version of the case exceed by more than 50% the Sigma level, the quality of software developed, and reduction of more than 55% of the time of development of the project” (p. 1616). Given global competition, companies must use every edge to improve.
Agent-Based Software Engineering
Yves Wautelet, Christophe Schinckus, and Manuel Kolp’s “Agent-Based Software Engineering, Paradigm Shift, or Research Program Evolution” (Ch. 73) suggests that an over-use of programming concepts and “not…organizational and human ones” can lead to “ontological and semantic gaps between the (information) systems and their environments.” To rectify this issue, they suggest that the use of multi-agent systems may help realign information systems with the people who use them, by “offering modeling tools based on organizational concepts (actors, agents, goals, objectives, responsibilities, social dependencies, etc.) as fundamentals to conceive systems through all the development process” (p. 1642). In this approach, the agent has autonomy, functions in a particular situation, and has designed flexibility in terms of actions (p. 1645).
Prison Education and Computer Science
Ezekiel U. Okike’s “Computer Science and Prison Education” (Ch. 74) proposes that national governments in developing countries should institute computer science as part of prison education, so inmates may achieve gainful employment when they leave incarceration and may reacclimate to societies that have computer technologies integrated into so many facets of life. “Computer science” is defined as “the study of computers and computational systems” (p. 1656), with the pragmatic aspects emphasized here. Problem-solving methods can be applied using CS, and there are various career paths available in firms of all sizes. The work continues by exploring various aspects of computer science and identifies how the knowledge, skills, and abilities in this space may benefit those in prison by enabling them to reform and acquire work. Some effective programs at various prisons are highlighted.
IT-Related Dilemmas and Human Decision Making
Chen Zhang, Judith C. Simon, and Euntae “Ted” Lee’s “An Empirical Investigation of Decision Making in IT-Related Dilemmas: Impact of Positive and Negative Consequence Information” (Ch. 75) uses a vignette-based survey to better understand individual decision making and intentions regarding IT security and privacy. Of particular interest is the “deterrent role of information about possible negative consequence in these situations” (p. 1671). The researchers observe that the influence of deterrent information “is greater in situations involving software products than in situations involving data and for individuals with a higher level of fundamental concern for the welfare of others” (p. 1671). Relevant information can be consequential in informing human behaviors related to information technologies, although these behaviors are also informed by “individual factors and situational factors” (p. 1671). Those with more idealistic ethics were more responsive than those with relativistic ones. Also, these researchers found that information about negative consequences was more motivating than information about positive consequences (p. 1685).
Software Piracy and IP
Michael D'Rosario’s “Intellectual Property Regulation, and Software Piracy, a Predictive Model” (Ch. 76) found that using a multilayer perceptron model to analyze IP piracy behaviors in the aftermath of IP regulations (IPRs) was better at predicting outcomes than other modeling methods. The data are focused on ASEAN member countries and a review of a dataset of various IP laws and observations of IP infringements (in WTO cases).
The author uses a three-layer multilayer perceptron model (MLP) artificial neural network (ANN) “with an input layer deriving from the variables provided by Shadlen (2005). Software is the variable denoting the rate of software piracy. Bilateral Investment denoted the level of advantage afforded through any bilateral investment treaty. WTO Case is a dummy variable pertaining to the existence of a case under review in the international courts relating to an intellectual property dispute, respectively. The U.S. 301 is available denoting inclusion within a USTR 301 report. Trade dependence is the critical trade relationship variable, accounting for the trade dependence of the ASEAN member country and the US and Canada” (D’Rosario, 2021, p. 1697). The model was able to predict the rate of piracy at “100 percent, across the ASEAN panel” (p. 1699), better than regression models when the focus is on outcome prediction (p. 1701).
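A minimal sketch of such a model might look like the following, with synthetic placeholder data and scikit-learn’s MLPRegressor standing in for whatever ANN toolkit the author used; the feature columns only echo the variable names quoted above and carry no real values.

```python
# Sketch only: a small MLP regressing a (synthetic) piracy rate on
# country-level indicators. Data and architecture are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: bilateral_investment, wto_case (dummy), us_301 (dummy), trade_dependence
X = np.array([
    [0.2, 0, 0, 0.35],
    [0.5, 1, 0, 0.60],
    [0.1, 0, 1, 0.20],
    [0.7, 1, 1, 0.75],
    [0.3, 0, 0, 0.40],
    [0.6, 1, 0, 0.55],
])
y = np.array([0.80, 0.55, 0.70, 0.40, 0.75, 0.50])   # synthetic piracy rates

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[0.4, 0, 1, 0.50]]))   # predicted rate for a new observation
```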
Learning Maya 3D with Video Tutorials
Theodor Wyeld’s “Using Video Tutorials to Learn Maya 3D for Creative Outcomes: A Case Study in Increasing Student Satisfaction by Reducing Cognitive Load” (Ch. 77) describes a transition from front-of-the-classroom teaching demonstrations of software to the use of custom-generated video tutorials, based on Mayer and Moreno’s theory of multimedia learning (2003). Wyeld found that university students ranked their satisfaction higher with video tutorials because of a sense of reduced cognitive load in learning Maya 3D with step-by-step directions. This work also includes the use of a PDF tutorial, kept open while completing particular procedural assignments in Maya 3D, which many consider a fairly complex software program. The benefit of tutorial videos is replicated in other teaching and learning contexts as well.
Fault Proneness Testing in Open Source Software
D. Jeya Mala’s “Investigating the Effect of Sensitivity and Severity Analysis on Fault Proneness in Open Source Software” (Ch. 78) notes the criticality of identifying (particular high-impact) faults in open source software. Some faults require dynamic code analysis to identify “as some of the components seem to be normal but still have higher level of impact on the other components” (p. 1743). This study focuses on “how sensitive a component is and how severe will be the impact of it on other components in the system” if it malfunctions (p. 1743). The author has designed a tool to apply a “criticality index of each component by means of sensitivity and severity analysis using the static design matrix and dynamic source code metrics” (p. 1743).
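A back-of-the-envelope version of such a criticality index (the chapter’s actual sensitivity and severity metrics are not reproduced here) might simply multiply the two dimensions and rank components accordingly:

```python
# Illustrative only: rank components by sensitivity x severity.
# Component names and scores are hypothetical.
components = {
    "auth":    {"sensitivity": 0.8, "severity": 0.90},
    "logging": {"sensitivity": 0.3, "severity": 0.20},
    "billing": {"sensitivity": 0.6, "severity": 0.95},
}

criticality = {name: m["sensitivity"] * m["severity"] for name, m in components.items()}

for name, score in sorted(criticality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")   # highest scores warrant the most testing attention
```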
Process Improvement in Web-Based Projects
Thamer Al-Rousan and Hasan Abualese’s “The Importance of Process Improvement in Web-Based Projects” (Ch. 79) explores how well software process improvement models may apply to web-based projects at smaller companies. Is there a fit? Is there room for benefit? These questions are asked in a context of high failure rates for such process improvement efforts and a reluctance to take these methods on given their “complex structure and difficult implementation methods” (p. 1770). Resistance (political, cultural, goal-related, and change-management) may exist in the organization (p. 1774). Various models are explored for their suitability in the described context, albeit without one that ticks all the boxes at present.
A Biological Computer Simulation for Research
Roman Bauer, Lukas Breitwieser, Alberto Di Meglio, Leonard Johard, Marcus Kaiser, Marco Manca, Manuel Mazzara, Fons Rademakers, Max Talanov, and Alexander Dmitrievich Tchitchigin’s “The BioDynaMo Project: Experience Report” (Ch. 80) focuses on the affordances of scientific investigations using computer simulations, which are now powered by high-performance computing and hybrid cloud capabilities that enable scaling. These simulations may be run to answer particular scientific questions. Setting up such research often requires interdisciplinarity.
Software Defect Prediction
Misha Kakkar, Sarika Jain, Abhay Bansal, and P.S. Grover’s “Combining Data Preprocessing Methods with Imputation Techniques for Software Defect Prediction” (Ch. 81) involves a study to find the “best-suited imputation technique for handling missing values” in a software defect prediction model (p. 1792). The researchers test five machine learning algorithms for developing software defect prediction models from the (incomplete) data, and these models are then tested for performance. The team found that “linear regression” with a correlation-based feature selector results in the most accurate imputed values (p. 1792).
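A minimal sketch of regression-based imputation follows, assuming scikit-learn’s IterativeImputer with a LinearRegression estimator as a stand-in for the chapter’s technique; the metric values are invented.

```python
# Illustrative regression-based imputation of missing software metrics.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

# Rows are modules; columns are metrics (e.g., LOC, complexity, churn). NaN = missing.
X = np.array([
    [120.0,  8.0,    3.0],
    [300.0,  np.nan, 9.0],
    [ 80.0,  5.0,    np.nan],
    [450.0, 22.0,   14.0],
])

imputer = IterativeImputer(estimator=LinearRegression(), random_state=0)
print(imputer.fit_transform(X))   # missing values filled by regressing on the other metrics
```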
Bringing out the Best in Employees
Mary-Luz Sanchez-Gordon’s “Getting the Best out of People in Small Software Companies: ISO/IEC 29110 and ISO 10018 Standards” (Ch. 82) suggests that human factors are critical in software development at smaller firms. This chapter provides “a holistic view of human factors on software process” in Very Small Entities (VSEs), within the software process defined in ISO/IEC 29110. The author proposes an “enhanced implementation of ISO/IEC 29110 standard based on ISO 10018” (p. 1812) that considers human factors, which inform issues of “communication, responsibility and authority, awareness, education and learning, and recognition and rewards” (pp. 1822 – 1823). The system is a three-tiered one. At the basic first level, managers need to be “better listeners” and encourage “open communication”; they need to “establish mechanisms for recognition and rewards” (p. 1824). At the second level, they can work “on education and learning, responsibility and authority, and teamwork and collaboration” (p. 1824). At the third level, managers can “keep on developing the factors of attitude and motivation, engagement, empowerment” and then work on “networking, engagement and creativity and innovation” (p. 1825). There is an assumption that earlier levels need to be achieved satisfactorily before advancing to higher ones because of dependencies.
Evolution of ISO/IEC 29110 Standards
Rory V. O'Connor and Claude Y. Laporte’s “The Evolution of the ISO/IEC 29110 Set of Standards and Guides” (Ch. 83) tries to remediate the reluctance of small organizations to adopt software and systems engineering standards, often because such standards are seen as created for larger organizations with more staffing and resources. The coauthors offer a historical view of the development of the ISO/IEC 29110 standards and related components. The rationale for developing this standard was to “assist very small companies in adopting the standards” (p. 1831). The chapter offers clear explanations in text, flowcharts, and diagrams, and it serves as a bridge to the resource for VSEs.
E-Commerce Software
If knowledge transfer is a basis for critical competitive advantage for small and medium-sized enterprises (SMEs), how are they supposed to capture such tacit knowledge and retain it for applied usage, especially from e-commerce software projects? Kung Wang, Hsin Chang Lu, Rich C. Lee, and Shu-Yu Yeh’s “Knowledge Transfer, Knowledge-Based Resources, and Capabilities in E-Commerce Software Projects” (Ch. 84) tackles the prior question and aims for their chapter to serve as “a clear guide to project managers in their team building and recruiting” (p. 1856). This research is based on real-world case studies from primary research.
A Review of the Literature: Security and Agile
Another work addresses software security in agile software development. Ronald Jabangwe, Kati Kuusinen, Klaus R Riisom, Martin S Hubel, Hasan M Alradhi, and Niels Bonde Nielsen’s “Challenges and Solutions for Addressing Software Security in Agile Software Development: A Literature Review and Rigor and Relevance Assessment” (Ch. 85) provides a literature review on this topic and offers that “there are ongoing efforts to integrate security-practices in agile methods” (p. 1875).
Developer Sentiment in Software Engineering
Developers play a critical role in software development, and as people, they experience emotions as individuals and as groups. Md Rakibul Islam and Minhaz F. Zibran’s “Exploration and Exploitation of Developers' Sentimental Variations in Software Engineering” (Ch. 86) shares an empirical study of “the emotional variations in different types of development activities (e.g., bug-fixing tasks), development periods (i.e., days and times), and in projects of different sizes involving teams of variant sizes” and also examines the impacts of emotions on “commit comments” (p. 1889). The authors explore ways to exploit awareness of human emotion to improve “task assignments and collaborations” (p. 1889). Another pattern they found: “…emotional scores (positive, negative and cumulative) for energy-aware commit messages are much higher than those in commit messages for four other tasks” (bug fixing, new feature, refactoring, and security-related) (p. 1896). Commit messages posted during the implementation of new features and security-related tasks “show more negative emotions than positive ones. Opposite observations are evident for commit messages for three other types of tasks” (bug-fixing, energy-aware, and refactoring) (p. 1897).
The authors add:
Significant positive correlation is found between the lengths of commit messages and the emotions expressed in developers. When the developers remain emotionally active, they tend to write longer commit comments. The developers tend to render in them more positive emotions when they work in smaller projects or in smaller development teams, although the difference is not very large. Surprisingly, no significant variations are found in the developers’ emotions in commit messages posted in different times and days of a week. (Islam & Zibran, 2021, p. 1908)
In their work, they took steps to establish construct validity and reliability.
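To illustrate the general idea of scoring commit messages by task type (the study used proper sentiment analysis tooling, not this toy lexicon), a crude version might look like the following; the word lists and messages are made up.

```python
# Toy lexicon-based sentiment scoring of commit messages, grouped by task type.
POSITIVE = {"improve", "clean", "support", "faster", "simplify"}
NEGATIVE = {"bug", "fail", "broken", "hack", "crash"}

def sentiment(message: str) -> int:
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

commits = [
    ("bug-fixing",  "fix crash when the config file is broken"),
    ("new-feature", "add export support and improve docs"),
    ("refactoring", "clean up and simplify module layout"),
]

for task, msg in commits:
    print(task, sentiment(msg))
```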
Productivity in Software Development
If software making were a factory, how should managers measure productivity, and then align the organization to productively make software to high standards? So ask Pedro S. Castañeda Vargas and David Mauricio in “A Review of Literature About Models and Factors of Productivity in the Software Factory” (Ch. 87). A review of the literature from 2005 – 2017 resulted in the identification of 74 factors (related to programming, analysis, and design and testing) and 10 models (p. 1911). This systematic study found that most factors related to the productivity of software making concern programming. There are some statistical techniques for measuring software productivity, but they fall short because, while many offer a function, “they do not lead to a formula…only mentioning several factors to take into account in the measurement” (p. 1929). This work suggests that there is more to be done in this space. [One cannot help but think that managers have some informal ways of assessing productivity, assuming they have full information.]
Service Sector Software Bug Handling
Anjali Goyal and Neetu Sardana’s “Bug Handling in Service Sector Software” (Ch. 88) provides a summary of the software life cycle and the criticality of identifying and mitigating bugs throughout, or risk “serious financial consequences” (p. 1941). This work focuses on various bug handling approaches for the technologies. Ideally, the higher-risk bugs with potentially severe outcomes are addressed as quickly as possible, while controlling against unintended risks from the fix itself. There are challenges with misidentification of bugs, a “heavy flow of reported bugs,” and other issues (p. 1954). Few insights are offered about the technologies used in the service sector, however, the title notwithstanding.
Software-Defined Radio
Nikhil Kumar Marriwala, Om Prakash Sahu, and Anil Vohra’s “Secure Baseband Techniques for Generic Transceiver Architecture for Software-Defined Radio” (Ch. 89) takes on the challenge of how to set up an effective software-defined radio (SDR) system that can handle corrupted signals through “forward error-correcting (FEC) codes” (p. 1961), in the absence of central standards. The worked problem here involves having an effective architecture for hardware and software. SDR systems are used for testing systems, collaboration, military-based radio communications, and international connectivity (p. 1964).
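As a deliberately simple illustration of the FEC principle (real SDR stacks use much stronger codes, such as convolutional, turbo, or LDPC codes, rather than this toy), a three-times repetition code shows how redundancy lets a receiver correct corrupted bits:

```python
# Toy forward error correction: a 3x repetition code with majority-vote decoding.
def encode(bits):
    return [b for b in bits for _ in range(3)]          # repeat each bit three times

def decode(coded):
    decoded = []
    for i in range(0, len(coded), 3):
        triplet = coded[i:i + 3]
        decoded.append(1 if sum(triplet) >= 2 else 0)   # majority vote corrects single-bit errors
    return decoded

message = [1, 0, 1, 1]
tx = encode(message)
tx[1] ^= 1                                              # flip one bit to simulate channel noise
print(decode(tx) == message)                            # True: the corrupted bit was corrected
```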
Smart Energy Management Systems
Amir Manzoor’s “Contemporary Energy Management Systems and Future Prospects” (Ch. 90) focuses on the transition from a traditional electric grid to a smart one (with energy management systems), potentially resulting in increased energy efficiency. The data are not only for personal households but also for organizations, businesses, industries, and other scales of entities. The thinking is that heightened awareness of energy consumption patterns enables people to better control their energy consumption. This metering and notification enablement requires advanced analytics and ICT. Such an approach is compelling in a time of heightened awareness of energy consumption, mass-scale anthropogenic effects on the larger environment, and the closing window during which humanity may head off an environmental catastrophe (by reducing carbon emissions, by capturing carbon, and other efforts). This work explores standards applied to these technologies and includes explorations of various brand-name products in the space.
Effective Practices for Global Software Development Teams
Muhammad Wasim Bhatti and Ali Ahsan’s “Effective Communication among Globally Distributed Software Development Teams: Development of an ‘Effective Communication’ Scale” (Ch. 91) suggests that international collaborations benefit from best practices for effective communication, which comprise four factors: “stakeholders’ involvement, acculturation, usage of appropriate tools and technology, and information available” (p. 2014). The coauthors develop a related scale based on a review of the literature and a survey of those working in global software development (p. 2033).
IST for Information Retrieval and Infrastructure Building in Developing Countries
Prantosh Kumar Paul’s “The Context of IST for Solid Information Retrieval and Infrastructure Building: Study of Developing Country” (Ch. 92) focuses on the importance of academic innovation, to benefit the economic development of developing countries. Such countries have gaps between “industrial needs and the availability of skilled labor” (p. 2040), and if human resources may be built up, they can advance society and human well-being. Information Sciences and Computing is seen as an important area in which to build human capital, particularly in the areas of “Cloud Computing, Green Computing, Green Systems, Big-Data Science, Internet, Business Analytics, and Business Intelligence” (p. 2040). Paul conducts a “strengths, weaknesses, opportunities, threats” (SWOT) analysis of the academic offerings in this area and generalizes various approaches based on a review of the literature. This work shows various modes of teaching and learning and various pedagogical approaches.
Facing the Challenges of Design Complexity
Dermott John James McMeel and Robert Amor’s “Knitting Patterns: Managing Design Complexity with Computation” (Ch. 93) focuses on intercommunications between proprietary software systems related to design, to extend various capabilities. The team works to “leverage emergent device and data ecosystems” to “knit” devices and services (p. 2055), so that proprietary software programs are not closed systems unto themselves, in the design and construction process. In a design context, software programs serve various “participant interactions, which includes designers, sub-contractors and project managers operating within smaller areas of the overall project” (McMeel & Amor, 2011, as cited in McMeel & Amor, 2021, p. 2056). They describe this shift in the field:
Information management within design continues to gather importance. This is perhaps best reflected in the shift from the acronym CAD (Computer Aided Design) to BIM (Building Information Model), where geometry is only part of the representation of buildings. In Autodesk’s Revit building information modeling software—for example—you cannot simply move your building from one point to another, there is information present about its relationships with other components that prevent particular translations or movements. Within this context computation becomes a useful asset for managing rules and behaviors that are desirable, if not always in the fore when decision-making. (McMeel & Amor, 2021, p. 2058)
They describe several cases:
Our first test case—an agent-based dynamic simulation combining natural and built environmental components—is deployed to explore the city as a multitude of interrelated natural and built patterns. We analyze the role this simulation might play in managing the complexities of rebuilding a sustainable urban environment after the devastating earthquake in Christchurch, New Zealand. The second test case deploys an iPad application to communicate with a BIM model—exploring the development of a mobile application and methodology for openly communicating outside of the intended software family. (McMeel & Amor, 2021, p. 2055)
This work is achieved with plug-ins and other tools.
Conclusion
A book of the size of Research Anthology on Recent Trends, Tools, and Implications of Computer Programming (4 volumes), with some 90+ chapters, does not lend itself easily to review. The works read a little like a grab bag of topics, with insights both for general readers and specialists. This work was collated by an editorial team within the publishing house, and the works have all been published previously in other texts by the publisher. Even with the number of works, this collection leaves a lot of ground uncovered in terms of software and programming, given the breadth of application of computer programming in the world.
The review was conducted over weeks of readings. After reading this work, one wonders how transferable complex computing skillsets may be, given the depth of the specializations. The works range from generalist pieces to specialist ones, with the latter using words, code, equations, mathematical notations, datasets, and diagrams.
About the Author
Shalin Hai-Jew works as an instructional designer / researcher at Kansas State University. Her email is shalin@ksu.edu.