
C2C Digital Magazine (Spring / Summer 2019)

Colleague 2 Colleague, Author


Book Review: Practical Interdisciplinary Approaches in Advancing Physics Education

By Shalin Hai-Jew, Kansas State University 




New Trends in Physics Education Research 
Salvatore Magazù
New York:  Nova Science Publishers 
2018 


Salvatore Magazù’s edited collection, New Trends in Physics Education Research (2018), proposes some practical ways to advance physics education. 

They include basics, such as considering students’ level of learning (high school through post-graduate studies) and where they are in a particular learning sequence.  There is also the practical approach of harnessing interdisciplinary learning to save time and resources.  

Then, there are some applied themes that are particular to physics and related fields.  Physics research should be

  • theoretically aligned
  • scientifically rigorous
  • hyper-precise in measurements
  • reproducible and repeatable
  • theoretically (and practically) falsifiable if untrue
  • practical for real-world problem-solving
  • clearly documented
  • and novel: first in the world, if at all possible 

…in highly competitive spaces, with much of business competitiveness and human well-being at stake.  (As cases in point, physics research has determined whether wars are won or lost, whether a nation-state’s industries lead the world or lag, whether populations’ livelihoods are protected, and other critical aspects.  The costs of R&D are high for bleeding-edge research, both theoretical and applied.  The world itself can be harsh in determining the relevance of research work.  Poorly informed decision-making can incur irretrievable costs.)  

Those professional values from physics (and the so-called hard sciences) are part and parcel of the teaching of physics.  Needless to say, these are tough standards, and achieving these objectives in learning requires faculty and learners to have an intensive background—for accurate learning and learning retention.   The harnessing of problem-based learning, sometimes using historical cases, involves much complexity.  There is a sense of developmental windows that can turn into unbridgeable gaps, if the teaching / learning and support are not sufficient.   However, there are upsides:  the current age offers a wide range of pedagogical, technological, and other tools to enable “complex comprehension” and “physical intuition” (Magazù, 2018, p. vii) and other advances in physics education.   

Oftentimes, in the science, technology, engineering, and math (STEM) fields, which require complex expertise, those who teach have built up the expertise but do not have pedagogical training and have long forgotten the novice learner’s mind.  Those who are willing to empathize with their learners and adjust to their needs may teach physics better than those who merely teach how they themselves learned (or think they learned).  

Thinking Imaginatively and Differently

From the outside, the creative mind may not seem to be so critical for the hard sciences, but creative faculties are critical in scientific fields—to understand what is not yet defined, to conceptualize and engage new paradigms, to create new materials and processes, and to apply known methods in new ways.  Pietro Calandra’s “Managing Complexity in Material Science: The Role of Imagination” (Ch. 1) highlights the work of those exploring the physical world in a research domain where not everything is yet known (in some fields, saturation may be near).  He opens the chapter with the riskiness of false dichotomies that can set humanity in a wrong direction for generations:  

Several poets in the past believed, after the Cultural Revolution in the seventeenth century, that science was killing the beauty of art and poetry, although opinions were generally discordant in intellectual classes. For this reason, imagination has been believed to be as opposite to scientific method. (Calandra, 2018, p. 1)  

Rather, there is power in “connections between the fascinating world of imagination-driven attitude and the rigorous world of scientific progress in material science” (Calandra, 2018, p. 2).  Rationalism and defining relationships between “theory, models and experiments” (p. 2) do not preclude an appreciation for thinking deeply and broadly and for conceptualizing the unseen.  

David Herbert Richard Lawrence (1885 – 1930) is evocatively quoted as blaming knowledge for removing a sense of magic from the world:  “’Knowledge’ has killed the sun, making it a ball of gas, with spots; ‘knowledge’ has killed the moon, it is a dead little earth pitted with extinct craters as with smallpox; the machine has killed the earth for us, making it a surface, more or less bumpy, that you travel over” (Calandra, 2018, p. 3).  [Interestingly, the landing of the Mars InSight on Nov. 26, 2018 was not in any way short of magic.  There can be human romance in scientific elegance and complexity.] 

Said another way, there is “poetry in nature” per Richard Dawkins’ Unweaving the Rainbow (1998, as cited in Calandra, 2018, p. 3).  Scientists can be poets, and poets can be scientists, and each can inform the other.  Progress in science may enhance “poetry, human feelings and imagination” (p. 4).  More importantly, imagination is part of a scientist’s work, enabling theorizing and grounding further observations and research.  Imagination may be harnessed and targeted to help in problem solving.  It may be harnessed to identify “suitable models” to understand the world (p. 10).  To promote more usage of imagination in science, the author proposes bringing together researchers with different areas of expertise to share ideas, and more critically, he suggests that science education is late to the work of harnessing the human imagination:  “Research is already on the right way, but education is late.  Ad-hoc actions in academic courses could speed up the process with great benefit for future students.”  (p. 12)

Students’ Answering Strategies in Physics Exams

Onofrio R. Battaglia, Benedetto Di Paola, and Claudio Fazio’s “An Unsupervised Quantitative Method to Analyse Students’ Answering Strategies to a Questionnaire” (Ch. 2) describes the usage of questionnaires as diagnostic instruments in physics learning.  Physics educators have an interest in understanding how students reason their way to particular responses, so they can improve their teaching.  The students’ answering strategies may inform on their conceptual learning.  To operationalize this study, the research team used two unsupervised cluster analysis methods to analyze student text responses to open-ended questions.  Using the algorithms, the student responses are categorized into homogeneous clusters to understand different lines of reasoning.  (The unsupervised aspect of the cluster analysis means that the responses do not have to fall into an a priori human-defined set of categories.)  The closed-ended questions are analyzed using numerical coding.  Finally, the open- and closed-ended learner responses are placed in a binary matrix (present / not present) and analyzed for correlation and distance, with a resulting distance matrix (Battaglia, Di Paola, & Fazio, 2018, p. 22).  Based on the proximity between learners and the number of centroids, they can be clustered into similar groups (using the k-means clustering algorithm), with confidence intervals of 95% (p. 28).  This enables an optimal partitioning of the student response samples.  Another approach involves hierarchical clustering, using a hierarchical clustering algorithm (H-ClA).  The researchers found coherence between the two clustering approaches on the shared dataset, which suggests that either method could be effectively applied for analyzing survey-based test results.  
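The k-means step the researchers describe can be sketched in a few lines of Python.  The toy binary matrix, the seeds, and k = 2 below are illustrative assumptions, not the chapter’s data:

```python
import numpy as np

# Rows are students; columns are coded answer features (1 = the
# feature appears in the student's response).  Toy data only.
X = np.array([[1, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 1, 1, 1, 0]], dtype=float)

# Plain k-means: assign each student to the nearest centroid, then
# move each centroid to its cluster's mean, until labels settle.
centroids = X[[0, 2]]                      # seed with two students
for _ in range(10):
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    labels = d.argmin(axis=1)
    centroids = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])

print(labels)  # → [0 0 1 1]: two distinct answering strategies
```

In practice, the distances would come from the correlation / distance analysis the chapter describes, and the number of clusters would be chosen from the data rather than fixed in advance.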

Computation and Physics

Marcos Daniel Caballero and Morten Hjorth-Jensen’s “Integrating a Computational Perspective in Physics Courses” (Ch. 3) makes a strong case for including computational knowledge, skills, and abilities (KSAs) for undergraduate students taking a physics curriculum.  They assert that physics tends to be taught in a homogeneous way to undergraduate learners.  

Physicists use various computational methods to test theories.  “Large-scale computational modeling and data analysis” capabilities have enabled some major recent discoveries in physics, including discoveries related to the Higgs boson, molecular modeling, gravitational waves, and other physics phenomena (Caballero & Hjorth-Jensen, 2018, p. 47).  Computational physics complements “analytical theory and experiments” (p. 49).  Scientific computation is critical in “the majority of today’s technological, economic and societal feats” (p. 48).  One in every two jobs in STEM fields “will be in computing” (Association for Computing Machinery, 2013) (p. 48).  This work involves structured and disciplined approaches:  

The power of the scientific method lies in identifying a given problem as a special class of an abstract class of problems, identifying general solution methods for this class of problems, and applying a general method to the specific problem (applying means, in the case of computing, calculations by pen and paper, symbolic computing, or numerical computing by ready-made and/or self-written software). (Caballero & Hjorth-Jensen, 2018, p. 50)  

While people may work algorithms out on paper for “continuous models,” only a few of these can be “solved analytically”; further, “approximate discrete models” require the solving of differential equations to address larger classes of problems (Caballero & Hjorth-Jensen, 2018, p. 50).  They explain:  

A typical case is that where an eigenvalue problem can allow students to study the analytical solution as well as moving to an interacting quantum mechanical case where no analytical solution exists. By merely changing the diagonal matrix elements, one can solve problems that span from classical mechanics and fluid dynamics to quantum mechanics and statistical physics.  Using essentially the same algorithm one can study physics cases that are covered by several courses, allowing teachers to focus more on the physical systems of interest. (Caballero & Hjorth-Jensen, 2018, p. 50)  
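A minimal sketch of that eigenvalue case, assuming a 1D quantum harmonic oscillator in natural units (ħ = m = ω = 1); the grid parameters are illustrative:

```python
import numpy as np

# Discretize -0.5*psi'' + 0.5*x^2*psi = E*psi on a grid with the
# three-point finite-difference Laplacian; the Hamiltonian becomes
# a tridiagonal matrix, exactly the setup the quote describes.
N, xmax = 1000, 10.0
x = np.linspace(-xmax, xmax, N)
h = x[1] - x[0]

diag = 1.0 / h**2 + 0.5 * x**2          # kinetic + potential terms
off = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)               # exact values: E_n = n + 1/2
print(np.round(E[:3], 3))               # → [0.5 1.5 2.5]
```

Changing the diagonal (the potential) swaps in a different physical system while the algorithm stays the same, which is the pedagogical point of the quoted passage.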

In physics education, computation (and related symbolic reasoning systems and mathematics) supports the learning, enables learner explorations in simulations, enables visualizations of physics phenomena, and helps answer complex theorized questions.  In the “zone of proximal development” for undergraduate learners, the computation skill set may include capabilities in using various technologies:  Python, R, Mathematica, Matlab; jupyter and ipython notebooks; Git for version control software and repositories like GitHub and GitLab, and markup / typesetting tools like LaTeX.  

At present, scientific programming is supported to some degree with overlaps in the related curricula:  

Moreover, one finds almost the same topics covered by the basic mathematics courses required for a physics degree, from basic calculus to linear algebra, differential equations and real analysis.  Many mathematics departments and/or computational science departments offer courses on numerical mathematics that are based on the first course in programming (Caballero & Hjorth-Jensen, 2018, p. 51)  

Additional adjustments may be made for more integrated teaching and learning of particular KSAs for undergraduates in physics education, for more seamless sequential learning.  They suggest the need for the following knowledge:  

  • the most fundamental algorithms for linear algebra, ordinary and partial differential equations, and optimization methods; 
  • numerical integration including Trapezoidal and Simpson’s rule, as well as multidimensional integrals; 
  • random numbers, random walks, probability distributions, Monte Carlo integration and Monte Carlo methods; 
  • root finding and interpolation methods; 
  • machine learning algorithms; and 
  • statistical data analysis and handling of data sets …

  • a working knowledge of advanced algorithms and how they can be accessed in available software; 
  • an understanding of approximation errors and how they can present themselves in different problems; and 
  • the ability to apply fundamental and advanced algorithms to classical model problems as well as real-world problems as well to assess the uncertainty of their results. (Caballero & Hjorth-Jensen, 2018, p. 52)  

They suggest that learners also need to understand how to combine numerical algorithms with symbolic calculations and then how to validate the algorithms.  The proper learning progressions require the appropriate priors on which the new learning depends.  To these ends, computational exercises and projects may be integrated into math courses.  This chapter reads like a manual for integrating computation into a physics education trajectory.  It is based on a physics program at the University of Oslo, Norway.  The work was funded by the U.S. National Science Foundation through several grants and by the Center for Computing in Science Education, University of Oslo, Norway.  
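One way to read “combine numerical algorithms with symbolic calculations” is to validate a trapezoidal-rule result against the symbolic answer; the integrand and interval below are illustrative choices of mine, not the chapter’s:

```python
import numpy as np
import sympy as sp

# Symbolic (exact) integral of x^2 * e^(-x) on [0, 5]
x = sp.symbols("x")
exact = float(sp.integrate(x**2 * sp.exp(-x), (x, 0, 5)))

# Numerical integral of the same function via the trapezoidal rule
xs = np.linspace(0.0, 5.0, 10_001)
ys = xs**2 * np.exp(-xs)
approx = float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)))

print(abs(exact - approx) < 1e-4)  # → True: numerics match symbolics
```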

Using GeoGebra for an Interactive Math-Physics Simulation

Annarosa Serpe and Maria Giovanna Frassia’s “Legacy and Influence in Mathematics and Physics with Educational Technology: A Laboratory Example” (Ch. 4) describes the usage of GeoGebra, a mathematical and geometrical modeling tool, to create a lab activity around Bréguet’s spiral as applied to wrist watches to improve time precision (p. 79).  

GeoGebra is apparently in a class of software known as Dynamic Geometry Software (DGS).  Such technological integrations in physics education may motivate learners by engaging complexity (“complexity makes topics intriguing and appealing to students”) and by relating abstract mathematical structures to physical realities (Serpe & Frassia, 2018, p. 80) and to real problems.  A real problem “offers a learning opportunity in three dimensions:  familiarity with real-world problems, supporting knowledge and processes and skill” (p. 81).   The co-authors observe:  

From a didactic point of view, it is necessary to clarify that a model is a very distinct intellectual construction from the system it intends to represent.  Also, modelling goes beyond the mere serial reproduction because it educates to an in-depth reflection on a problem; it helps to familiarize us with several important perspectives (for example, that the same mathematical model can describe different phenomena depending on the meaning of the variables) and look at well known concepts with new eyes. (Serpe & Frassia, 2018, p. 81)  

A “DGS” allows learners “to experience mathematical facts directly, at different levels:  the students have the real chance of creating a model and work(ing) on it constructively, exploring properties, formulating conjectures and testing them through the software tools” (Serpe & Frassia, 2018, p. 81).  For the development of this lab, the co-authors did not start with the software, but they went through a four-step process involving the following phases:  “(1) Historical and philosophical approach to telling the time, (2) A device for telling the time; the mechanical watch, (3) Isolating the physical element (Bréguet’s spiral) from the mechanism, and (4) Computer Mathematics Modelling of Bréguet’s spiral” (p. 82). Such in-depth thinking and preparation often enable not only a stronger model but a more powerful application of pedagogically supported collaborative learning.  This chapter includes some engaging screenshots of the exercise in GeoGebra, including three different types of Bréguet’s spiral in this tool.  
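Outside GeoGebra, the final modelling phase can be approximated with a parametric Archimedean spiral, a common simplified stand-in for a Bréguet-style balance spring (the parameters a, b, and the angular range are illustrative assumptions):

```python
import numpy as np

# Archimedean spiral r = a + b*theta over three turns; the radius
# grows by 2*pi*b per full turn, giving evenly spaced coils.
a, b = 1.0, 0.2
theta = np.linspace(0, 6 * np.pi, 601)
r = a + b * theta
xs, ys = r * np.cos(theta), r * np.sin(theta)   # Cartesian points

spacing = r[200] - r[0]        # theta[200] is one full turn later
print(round(spacing, 3))       # → 1.257, i.e., 2*pi*0.2
```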


Force at Nano-Scale

Domenico Lombardo’s “Evolution of the Concept of Force in Physics and Current Nanoscience:  New Perspectives in Teaching Programs” (Ch. 5) describes the study of forces acting “at the molecular and supramolecular levels” (Lombardo, 2018, p. 97), a necessary awareness for those creating new materials using nanoparticles and deploying them in particular contexts.  Nanotechnology education has been a part of European Union education since 2004, and the European Commission Communication “towards a European strategy for nanotechnology” highlighted several needs, including “promoting the interdisciplinary education and training of R&D personnel together with a stronger entrepreneurial mindset” (Lombardo, 2018, p. 98).  

The conceptual framework for this study begins with extant system particles in an initial configuration, but based on various physical forces, there are interactions between the particles, which change over time and result in a different end state.  The traditional forces cited include gravitation, electrostatic, magnetic, elastic, and friction; and the extended forces include “screened coulombic (DLVO), hydrophobic effect, steric repulsion, hydration forces, and van der Waals” (Lombardo, 2018, p. 100).  Understandings of both self-assembly and “many-body interaction” enable understandings of how molecular units and multicomponent interacting units interact and result in various assembled and self-assembled structures (p. 100).  The author explains forces in more depth:  

One of the main characteristics of the selfassembly (sic) processes in nanoscience is the weakness of the involved forces (soft interactions) together to the involvement of a multiplicity of interaction site.  These forces (due to their action on many-body systems) have some peculiar characteristics that distinguish them from the forces treated in traditional curricula.  Despite the weakness of the interactions involved, the relevant number of these forces produce, in fact an overall effect which is strong enough to hold together different molecular structures (building blocks). Detailed treatment of the main soft (non-covalent) forces acting in nanostructures self-assembly (such as the hydrogen bonding, hydrophobic effects, screened electrostatic interaction, steric repulsion and van der Waals forces represents then an important and fundamental upgrade of modern approach in physics curriculum.  

The involvement of a multiplicity of interaction sites, which is one of the main peculiar aspect(s) of soft interactions, requires a substantial change in how these new topics must be addressed within a physics teaching program, with respect to the traditional Newtonian framework.  (Lombardo, 2018, p. 102) 

In terms of molecular dynamics, are the forces attractive, repulsive, or neither?  How do counterforces affect the multi-body element?  Are they occurring in certain sequences or simultaneously?  What changes are reversible, and which are not?  How accurate can the theorized modeling of these soft interaction forces be?  The author offers a visual schematic of how to understand the “molecular dynamic approach and the main interactions (force fields) within a multi-component system of nanoscience” (Lombardo, 2018, p. 105).  The author notes that experimental approaches are needed to test the molecular dynamics simulations because of the inexactness of the simulation (in a field which requires hyper-precision).  Further, there is work on supramolecular interactions between building blocks acted on by external stimuli, changing the original building blocks and supramolecular structures (p. 106).   
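The attractive-or-repulsive question has a textbook illustration in the Lennard-Jones potential, a standard pairwise interaction model (my example here, not taken from the chapter):

```python
# Lennard-Jones: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6);
# the force F = -dV/dr changes sign at r = 2**(1/6) * sigma.
eps, sigma = 1.0, 1.0

def lj_force(r):
    return 24 * eps * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

print(lj_force(1.0) > 0)   # → True: close in, the force is repulsive
print(lj_force(1.5) < 0)   # → True: farther out, it is attractive
```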

Applying Wavelet Transform Analysis to Conic Pendulums

M.T. Caccamo and S. Magazù’s “A Conic Pendulum of Variable Length Analyzed by Wavelets” (Ch. 6)  describes the application of wavelet transform (WT) analysis “on the time evolution of the registered signal frequency content” (p. 117) and introduces wavelet transform approaches to signal detection through mixes of drawings and formulas.  A conic pendulum is described as follows:  

…a mass, hanged by an inextensible string having negligible mass. The mass, that is under the action of gravity, moves with an initial tangential velocity.  When the frictions connected to both the suspension constraint and viscosity of the medium in which the pendulum is immersed are negligible, the mass performs a rotary motion on an ideal horizontal plan(e) with a linear velocity that is constant in modulus. Under these ideal conditions, where no energy dissipation occurs, if the pendulum length is constant in time, the mass rotates following a circular trajectory while the mass-string system describes a conic surface remaining in a state of dynamical equilibrium.  The corresponding time law projected along an arbitrary vertical plane is a sinusoidal function characterized by a constant angular velocity whose Fourier transform furnishes a peak centred at the motion angular frequency. (Caccamo & Magazù, 2018, p. 120) 

The authors observe that “signal processing…” is important in “different experimental fields, including Physics, Chemistry, Biology, Neutron scattering and Meteorology” (Caccamo & Magazù, 2018, p. 127), and more sensitive and flexible signal processing may enable the separation of signal from noise in more effective ways.  
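The advantage of time-localized analysis can be sketched with a signal whose frequency drifts, standing in for the variable-length pendulum; the sampling rate, drift, and window sizes are illustrative assumptions:

```python
import numpy as np

# A "pendulum" signal whose instantaneous frequency rises from
# 1 Hz to 3 Hz; a single global Fourier transform smears this out,
# but windowed (time-localized) spectra reveal the drift.
fs = 1000                                    # samples per second
t = np.arange(0, 20, 1 / fs)
sig = np.sin(2 * np.pi * (t + 0.05 * t**2))  # inst. freq = 1 + 0.1*t

def dominant_freq(window):
    spec = np.abs(np.fft.rfft(window))
    return np.fft.rfftfreq(len(window), 1 / fs)[spec.argmax()]

early = dominant_freq(sig[: 4 * fs])         # first 4 seconds
late = dominant_freq(sig[-4 * fs:])          # last 4 seconds
print(early < late)                          # → True: frequency rose
```

A wavelet transform refines this windowing idea by adapting the window scale to the frequency being examined.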


Engaging the Adiabatic Piston Problem with Arduino Board

G. Castorina, M.T. Caccamo, and S. Magazù’s “A New Approach to the Adiabatic Piston Problem through the Arduino Board and Innovative Frequency Analysis Procedures” (Ch. 7) engages a case used in the advanced thermodynamics curriculum.  This is a problem posed by Crosignani and Di Porto (1996) “to examine the adiabatic piston time evolution towards the mechanical equilibrium (final pressure determination) and towards the thermal equilibrium (final temperature determination)” (p. 133), to model piston functioning in a closed system. 

From the thermodynamics analysis, researchers are not able to unequivocally “determine the final state of the system” but can overcome the indeterminacy using “an appropriate kinetic model” (Castorina, Caccamo, & Magazù, 2018, p. 133).  Methodically, this research team describes their processes of setting up the adiabatic piston problem analytically through equations on the one hand and then with analog equipment, visually depicted in an evocative triptych photo (p. 147).  The co-authors wrap with an engaging summary mental map of their processes.  

Applying Acoustic Force for Particle Levitation

A. Cannuli, M.T. Caccamo, G. Sabatino, and S. Magazù’s “Acoustic Standing Waves” (Ch. 8) explores the phenomenon of acoustic levitation, based on two mechanical models.  Waves are defined as “perturbations originated from a source which, although of different nature, have in common the same characteristic equation” (2018, p. 157). They explain:  “When an acoustic wave reflects off of a surface, the interaction between its compressions and rarefactions induces interferences, that can combine to create a standing wave” (p. 165).  
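The quoted mechanism reduces to a superposition identity, sin(kx − ωt) + sin(kx + ωt) = 2 sin(kx) cos(ωt), a wave whose nodes never move.  A short numerical check (k, ω, and the grid are illustrative assumptions):

```python
import numpy as np

# An incident wave plus its reflection form a standing wave whose
# nodes (the zeros of sin(kx)) stay fixed at every instant; such
# fixed points are where particles can be acoustically trapped.
k, w = 2 * np.pi, 3.0
x = np.linspace(0, 1, 201)        # sin(kx) nodes at x = 0, 0.5, 1
nodes = [0, 100, 200]             # grid indices of those nodes

for t in (0.0, 0.4, 0.8):
    total = np.sin(k * x - w * t) + np.sin(k * x + w * t)
    print(np.abs(total[nodes]).max() < 1e-9)   # → True each time
```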

For the general public, levitation exists in the realm of magic shows and sci-fi movies.  However, in physics, levitation techniques spanning “optical, electro-magnetic, electrostatic, gas-film, aerodynamic and acoustic levitation” have been invented and applied to various physical matter (pp. 159 - 160).  In this work, they study “the damped oscillations of an acoustically levitated sphere” (p. 157) and levitation forces from multiple directions applied to the levitation of particles, among other applications.  Some of the visuals included are scalograms, with correlating graphs.  

Deep Science Behind Meteorological Maps

Common users of meteorological maps may reference the evocative visuals and complementary text to understand weather in the near future and maybe a few days out.  In general, most may not know the complex science behind these maps or even that the study of meteorology goes back centuries.  F. Colombo, M.T. Caccamo, and S. Magazù’s “Meteorological Maps: How Are They Made and How to Read Them. A Brief History of the Synoptic Meteorology during the Last Three Centuries” (Ch. 9) explains that the study of weather and climate draws on “the laws of physics, mathematics and chemistry to simplify the understanding of the process occurring in the Earth’s atmosphere” (Colombo, Caccamo, & Magazù, 2018, p. 191).  Core instruments used in recording weather observations hail from the late 16th and first half of the 17th centuries, including “the thermometer, barometer, hydrometer, as well as wind and rain gauges” (“Meteorology,” Jan. 26, 2019). 

The authors suggest that lay consumers of weather maps may not understand the visual overlaps.  For example, in terms of a pressure systems map, there are anticyclones, cyclones, saddles, slopes, ridges, troughs, and secondary lows (Colombo, Caccamo, & Magazù, 2018, p. 196).  Who knew?  

Modern weather maps are informed by “a multitude of systems unknown at the time of Le Verrier, that includes super-fast computers, satellites, data networks, electronic meteorological instruments, radar and more other tools” for planetary-size awareness of weather (Colombo, Caccamo, & Magazù, 2018, p. 198).  In a visual depicting this, there are data from ocean data buoys, aircraft, weather ships, polar orbiting satellites, geostationary satellites, weather radars, upper-air stations, and surface stations (both automated and human-manned).  A map of the global surface meteorological station network shows a world dotted with stations (p. 199). The co-authors describe the tools used to capture upper atmosphere data, including helium-filled balloons released “simultaneously from all the upper observation stations of the world at 00 and 12 UTC” to capture “temperature, humidity and pressure data” along with wind data to receiving stations (p. 202).  For all this sophistication, it turns out that paper recording has been in use up until just the last few years when weather observers went fully computerized.  The co-authors explain:  

The process of production of maps at high altitude does not differ much from the one to the ground:  in place of the surface pressure, the height of the isobaric surfaces are represented. For this reason, the upper maps are also called absolute topographies and altitudes are expressed in geopotential heights, a term that is obtained from the relationship between geopotential (the work necessary to overcome the force of gravity and move upwards, to a certain height), a unit mass of air, and gravity at sea level. (Colombo, Caccamo, & Magazù, 2018, p. 205)  

Also, the authors apply wavelet analysis to explore temperature time series data (Colombo, Caccamo, & Magazù, 2018, p. 211).  

Long Time Scale Climate Data

S. Magazù and M.T. Caccamo’s “Fourier and Wavelet Analyses of Climate Data” (Ch. 10) describes a case study involving long-term climate data and the signal analysis of periodicities in that data.  The co-authors apply the Fourier Transform (FT) and Wavelet Transform (WT) to data from glacial ice cores from Antarctica and find time intervals (based on glacial maxima) of around every 100,000 years, which they connect with “the variation of the eccentricity of the Earth’s orbit” (2018, p. 225).  They also observe, “Moreover, between 1.8 and 1.3 million years ago, glacial maxima repeat about every 41,000 years in agreement with the cycle of variation of the Earth’s axis inclination” (p. 225), along with periods of high and low eccentricity.  
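The periodicity detection can be sketched on a synthetic record with the two orbital periods built in; the record length, sampling interval, and amplitudes are illustrative assumptions:

```python
import numpy as np

# Synthetic "climate" signal carrying 100,000- and 41,000-year
# cycles; the record length (4.1 million years) is a common
# multiple, so both periods land on exact FFT bins.
t = np.arange(0, 4_100_000, 1_000)          # one sample per kyr
sig = (np.sin(2 * np.pi * t / 100_000)
       + 0.5 * np.sin(2 * np.pi * t / 41_000))

spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(t), d=1_000)    # in cycles per year

# The two strongest peaks recover the orbital periods
top2 = freqs[np.argsort(spectrum)[-2:]]
periods = sorted(int(round(1 / f)) for f in top2)
print(periods)                              # → [41000, 100000]
```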

This work shows the layering on of scientific expertise, with findings in different areas affecting other sciences—both peripheral and far.  

Multi-disciplinary Approaches to Soft Condensed Matter Physics (SCMP)

Domenico Lombardo and Mikhail A. Kiselev propose “A Conceptual Map for Multidisciplinary Approaches of Soft Condensed Matter Physics in Postgraduate Programs” (Ch. 11).  They explain the difference between “hard condensed matter physics,” which engages “materials with higher structural rigidity such as crystalline solids, glasses, insulators, metals, semiconductors and new quantum materials” to “describe their structural, electronic, and transport properties,” and “soft condensed matter physics,” which engages “macromolecular self-assembly processes and structural organization in the soft interacting materials” and includes “the study of colloidal dispersions, soft glasses, liquid crystals, polymers and polyelectrolytes, complex fluids and biological systems (such as bio-membranes and cells),” with implications for “material science, biotechnology, nanomedicine, food and personal care products research” and others (Lombardo & Kiselev, 2018, p. 246).  

Lombardo and Kiselev suggest harnessing the synergies of various sciences to promote interdisciplinary advances in student learning.  In their model for post-graduate students, student agency is encouraged, such as by enabling them to select particular disciplines for deeper study.  Both teachers and post-graduate students need to acquire “broader interdisciplinary conceptual knowledge” and “integration of skills with up-to-date contents for an integrated learning approach in which all are advancing together” (Lombardo & Kiselev, 2018, p. 245).  This work has clear industrial applications.  

The study of condensed matter physics in postgraduate academic programs is mainly motivated by the demand of search for new (nano)materials with unexpected properties. This research field, which is one of the largest subfield (sic) of modern physics, seeks to understand the structural and dynamic phenomena arising from the interactions of particles (~10²³ atoms, molecules or macromolecules) in simple or multicomponent macroscopic materials. (Lombardo & Kiselev, 2018, p. 246) 

In this chapter is a visual conceptual map “describing the main theoretical methods for the study of the properties of soft interacting materials” (Lombardo & Kiselev, 2018, p. 249).  This includes system modeling, real interaction experiments, mathematical schemes, molecular dynamic simulations, the application of analytical theories, and statistical physics. 

Exploring the Thermal Radiation of a Real-Body 

In applied physics, researchers often strive to understand the nature of materials as accurately and precisely as possible.  Anatoliy I. Fisenko, Vladimir F. Lemberg, and Sergey N. Ivashov’s “Generalized Wien’s Displacement Law of Thermal Radiation of a Real-Body” (Ch. 12) describes “the temperature dependence of the ‘generalized’ Wien displacement law for the several metals, carbides, luminous-flames, and the quasi-periodic micro-structured silicon coated with 100 nm thick Au film” (2018, p. 259).  They explain their research question this way:  “When a lecturer conducts a course of thermodynamics or statistical physics on a topic related to the thermal radiation of a blackbody, students very often ask how we can generalize the well-known Wien displacement law to the case of thermal radiation of a real-body?  Or, in other words, can we determine the true temperature of real-body radiation by measuring the maximum of the spectral energy density?” (p. 260) [A “blackbody” is “a theoretically ideal radiator and absorber of energy at all electromagnetic wavelengths” named thus because “a cold blackbody appears visually black” (according to WhatIs.com) (Rouse, Sept. 2005).]  
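The classical starting point of that question is simple enough to compute directly; the chapter’s contribution is generalizing it to real (non-ideal) bodies.  The temperatures below are illustrative:

```python
# Wien displacement law for an ideal blackbody: lambda_max = b / T,
# with b the Wien displacement constant (CODATA value).
b = 2.897771955e-3                  # metre-kelvins

for T in (3000.0, 5772.0):          # a lamp filament; the Sun
    lam_nm = b / T * 1e9            # peak wavelength in nanometres
    print(round(lam_nm, 1))         # → 965.9 then 502.0
```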

Biological Membrane Physics

Biological membranes are an important subject in the study of human and animal health.  To understand them, chemists, biologists, chemical engineers, biophysicists, biotechnologists, physicists, and nano-medicine experts may all play a role in understanding biological membranes.  So assert Domenico Lombardo and Mikhail A. Kiselev in “Physics of Biological Membrane:  An Interdisciplinary Approach to Research and Education” (Ch. 13). They explain the relevance of this field:  

In living systems the cellular plasma membrane represents a selectively permeable interface between the extracellular fluid environment and the internal cell cytoplasm.  This protective barrier of the cell, which is mainly composed of lipids, proteins and sugars, is involved in many important cellular functions such as signaling and sensing, selective transport of matter, cell adhesion and electrical action potential propagation.  In basic research, biomembranes are also important as delivery agents for drugs, enzymes and genetic material through the living cell membranes and other hydrophobic barriers of the bio-nanomaterials. For this reason the study of biological membranes represents a central topic at the crossroads of different disciplines including biochemistry, food industry, biotechnology and nano-medicine (Lombardo & Kiselev, 2018, pp. 278 - 279) 

A variety of techniques, including high-resolution mass spectroscopy, in vitro experiments, and optical far-field microscopy, is applied to this work (Lombardo & Kiselev, 2018, p. 280).  They describe a basic lipidomic workflow: acquisition of the biological sample, chromatographic separation and analysis, identification and quantification of lipids, statistical analysis and classification models applied to the data, and then pathway analysis and biological modeling (p. 284).  

Some Apparent Book Production Issues

The substance of this book is relevant and memorable.  If there is one quibble, it is with some of the book’s production challenges: dozens of typos (across different chapters), a stretched graph (incorrect aspect ratio), an unsourced third-party image, diagrams that break some of the basic rules of visual expression, a lack of a consistent visual look-and-feel across the work, and other simple issues.  

Those who pay attention to bylines may have noticed that the book’s editor co-authored several of the works, and prior publications of his have also been cited in the References lists.  Generally, neither is optimal practice even if understandable.  It can be difficult to attract sufficient interest in a book project.  

It is hard to make the case that publishing in exchange for an electronic copy of a text is a motivating offer, when most published works require original research, original data, clean writing, clear citations, and a long process of double-blind peer review and critique, sometimes stretching over a year or longer.  Even if the talent pool is a world of practitioners, that world quickly becomes a very small pool when it comes to rare expertise (requiring the combination of all the features above) and / or the rare collaborations needed to assemble that expertise.  And this does not even address the need for expensive technologies, labs, equipment, and other resources necessary for the work.  

Conclusion

The work of physics education involves inherent challenges.  In a complex world, those who can best engage it are those who can use rigorous methods to engage the facts.  Teachers of physics have an important responsibility in teaching science, reasoning, equipment, and common understandings, without locking learners into paradigms that could limit what they can see.  There are yet additional challenges.  

On one hand there is the need to simplify the educational material, depending on the student level of skills and abilities, to adapt the contents to the available time.  On the other hand, there is the demand for programme updates due to the obsolescence of the contents, and therefore the need to consider recent research developments that are considered relevant. Furthermore, some of (sic) recent developments can be often unknown to some teachers which instead prefer stabilized topics, with definitive results acquired a long time ago.  In other ways, it should be taken into account that many experimental and theoretical results, even if not too recent, can be in contrast with the contents usually proposed in standard university manuals, these latter being often uncritically repeated and therefore usually hard to question. (Caccamo & Magazù, 2018, p. 118)  

There is critical value in considering effective ways to advance physics education.  In many of the works here, the actual learning is not assessed, so the efficacy of each approach is left untested.  Ideally, the learner experience and post-learning performance should be addressed (as compared to a control group).  

Salvatore Magazù’s New Trends in Physics Education Research (2018) introduces some thought-through pedagogical methods and technologies that can advance the work of physics education. The respective works are all quite readable, with clear efforts at providing sufficient explanatory depth to the work.  

The precision of academic writing in science work differs markedly from that of other types of academic writing.  For a non-physics major book reviewer, some of the phrasing has special appeal: “In gases, such as air, only longitudinal waves propagate, since cohesion effects able to recall the medium towards the equilibrium position are negligible” (Cannuli, Caccamo, Sabatino, & Magazù, 2018, p. 158).  With precision writing, some level of sense-making is possible, even for non-experts.  However, simple explorations online show the complexity of the science being addressed.  This work shows physics in the realm of solving hard problems, with speed, with accuracy, with provability, and ultimately, with application to real world challenges and needs, in tough business environments with competitors all seeking advantage and intellectual property (IP) value.  


References

“Meteorology.”  (2019, Jan. 26).  Wikipedia.  Retrieved Feb. 1, 2019, from https://en.wikipedia.org/wiki/Meteorology.  

Rouse, M.  (2005, Sept.).  Blackbody (definition).  WhatIs.com.  Retrieved Feb. 1, 2019, from https://whatis.techtarget.com/definition/blackbody.  




About the Author

Shalin Hai-Jew works as an instructional designer at Kansas State University. Her email is shalin@k-state.edu.  


Note:  Thanks to Nova Science Publishing for a watermarked digital review copy of the text. The reviewer has no relationship with the publisher.  
