Friday, August 31, 2018

Proton pulses accelerate electrons to 2 GeV


AWAKE 360
Protons AWAKE: a 360° view of the AWAKE experiment at CERN. (Courtesy: Maximilien Brice/Julien Marius Ordan)
A beam of protons has been used for plasma wakefield acceleration for the first time, driving electrons to energies of 2 GeV over a distance of just 10 m. The technique was developed by CERN’s AWAKE collaboration and is still preliminary, but it could potentially accelerate fundamental particles to very high energies.
CERN’s Large Hadron Collider (LHC) accelerates protons to 6.5 TeV before smashing them together at a combined energy of 13 TeV. Protons are relatively heavy and comprise three quarks, which means that the collisions produce huge quantities of particles that must be detected and analysed.
While sifting through the debris of these collisions has led to important discoveries – including the Higgs boson – it is a complicated and data intensive process. As a result, some particle physicists have proposed that the next big collider after the LHC should use lighter, fundamental particles such as electrons and positrons. This would result in much “cleaner” collisions that produce far fewer particles.
The problem is that circular accelerators like the LHC are ill-suited to colliding light fundamental particles. Accelerating charged particles in curved paths causes them to emit synchrotron radiation, and light particles lose much more energy in this process than heavier ones. Therefore, most designs for fundamental particle colliders are linear. The International Linear Collider – proposed for construction in Japan – would need to accelerate electrons for over 11 km to reach 0.25 TeV.

Plasma surfing

Plasma wakefield acceleration offers a very different way of accelerating electrons over much shorter distances. An intense pulse of particles or laser light is fired into a plasma, separating electrons from ions to create a huge electric field that propagates like a ship’s wake (the wakefield) behind the pulse. If electrons are injected at precisely the right time, they can surf this wave and be accelerated to very high energies over relatively short distances.
Much as larger wakes can be created by larger ships, larger wakefields can be created in plasmas by using more energetic pulses. Previous experimental demonstrations of plasma wakefield acceleration have used either laser pulses or electron bunches to create the necessary wakefields. Unfortunately, the maximum energy that can be packed into a single laser pulse, for example, is around 1 J, which means complex, multi-stage accelerators would be required to accelerate electrons to the highest energies.
Protons, however, are relatively easy to accelerate and in 2009, Allen Caldwell of the Max Planck Institute for Physics in Munich and colleagues proposed that a 100 micron-long proton bunch could accelerate electrons to over 0.5 TeV in less than 500 m. There was one problem with this scheme – 100 micron-long, ultradense proton bunches do not yet exist.

Self-modulating bunches

The bunches from CERN’s Super Proton Synchrotron used by AWAKE are around 10 cm long, so the team first fires a bunch into a plasma, which causes it to “self-modulate” into a series of shorter bunches. “These small bunches are shorter and denser,” explains AWAKE member Matthew Wing of University College London. “Their electric fields are completely in phase, so they constructively interfere to drive stronger and stronger wakefields.”
Plasma physicist Sébastien Corde of the Laboratoire d’Optique Appliquée in France is impressed: “This solves a lot of potential issues that we have in plasma acceleration,” he says. He cautions, however, that much work remains: “For every 2600 electrons injected, only one gets trapped into the plasma wave and accelerated… Clearly that’s something they’ll have to work on.”
By injecting electrons near the back of the bunch, the researchers accelerated them to 2 GeV in just 10 m of plasma. “Our theoretical colleagues have shown that, if you take the LHC bunch as it is right now, you could accelerate electrons to roughly 1 TeV in just over 1 km and get to 6 TeV in about 8-10 km,” says Wing. “We need to do further R&D to demonstrate that it’s possible to get up to those high energies with excellent beam qualities, but hopefully this will kick-start people to think how this could be incorporated into a design for a future collider.”
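As a rough sanity check, the numbers above can be put into a few lines of Python: the demonstrated gradient works out to 0.2 GeV/m, and a naive constant-gradient extrapolation (which, unlike the theorists’ estimate quoted by Wing, assumes nothing stronger than the wakefield already demonstrated) shows why a denser driver such as the LHC bunch is needed to reach TeV energies in about a kilometre.

# Back-of-envelope check of the gradient implied by the AWAKE result
# (2 GeV gained over 10 m), plus a naive constant-gradient extrapolation.
# This is not the AWAKE collaboration's own projection.
energy_gain_gev = 2.0    # electron energy gain reported by AWAKE (GeV)
plasma_length_m = 10.0   # length of the plasma cell (m)

gradient = energy_gain_gev / plasma_length_m   # GeV per metre
print(f"Demonstrated gradient: {gradient:.1f} GeV/m")

# Distance needed to reach 1 TeV (1000 GeV) at this same gradient
print(f"Naive length for 1 TeV: {1000 / gradient:.0f} m")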
Wim Leemans, director of the Berkeley Lab Laser Accelerator Center in California, adds: “Having been in [plasma wakefield acceleration] for over three decades, I find it very important that CERN – the major high-energy physics lab in the world – has started investing in this technology.”
The research is described in Nature.

Monday, August 27, 2018

Beyond Pi: 7 Underrated Single-Letter Variables and Constants






No one can deny that pi (π, the ratio of the circumference of a circle to its diameter) is a useful constant—drafted into service every day in furniture workshops, in precision toolmaking, and in middle-school and high-school mathematics classes around the world. π is used to calculate the volumes of spheres (such as weather balloons and volleyballs) and cylinders (like grain silos and cups). The cult status of this little irrational number (commonly approximated as 3.14 or 22/7) is so significant that March 14 (3.14) is celebrated as “Pi Day” annually. But what about other single letters, Greek and otherwise, that serve as valuable mathematical and scientific tools? Aren’t they just as important as pi? It depends on whom you talk to, of course. The following is a short list of less-famous but commonly used single-letter constants and variables.
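For readers who like to see the formulas in action, here is a minimal Python sketch of the two volume calculations mentioned above; the radius and height values are invented purely for illustration.

import math

def sphere_volume(r):
    """Volume of a sphere of radius r: V = (4/3) * pi * r**3."""
    return 4.0 / 3.0 * math.pi * r ** 3

def cylinder_volume(r, h):
    """Volume of a cylinder of radius r and height h: V = pi * r**2 * h."""
    return math.pi * r ** 2 * h

# Example values chosen only for illustration
print(f"Volleyball (r = 0.105 m): {sphere_volume(0.105):.4f} m^3")
print(f"Grain silo (r = 4 m, h = 20 m): {cylinder_volume(4, 20):.1f} m^3")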

G or “Big G”




  • G (or “Big G”) is called the gravitational constant or Newton’s constant. Its numerical value depends on the physical units of length, mass, and time used, and it sets the strength of the gravitational attraction between two masses. G was first used by Sir Isaac Newton to calculate gravitational force, but it was first measured by British natural philosopher and experimentalist Henry Cavendish during his efforts to determine the mass of Earth. “Big G” is a bit of a misnomer, however, since it is very, very small, only 6.67 × 10−11 m3 kg−1 s−2.
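A minimal Python sketch of how Big G enters Newton’s law of gravitation, F = Gm1m2/r²; the Earth–Moon figures are standard textbook values rather than numbers from the passage above.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Attractive force (N) between point masses m1 and m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r ** 2

m_earth = 5.972e24   # mass of Earth, kg
m_moon = 7.348e22    # mass of the Moon, kg
distance = 3.844e8   # mean Earth-Moon distance, m

print(f"Earth-Moon force: {gravitational_force(m_earth, m_moon, distance):.2e} N")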

Delta (Δ or d)




  • As any student of calculus or chemistry knows, delta (Δ or d) signifies change in the quality or the amount of something. In ecology, dN/dt (which could also be written ΔN/Δt, with N equal to the number of individuals in a population and t equal to a given point in time) is often used to determine the rate of growth in a population. In chemistry, Δ is used to represent a change in temperature (ΔT) or a change in the amount of energy (ΔE) in a reaction.
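As a concrete (and entirely made-up) example of dN/dt at work, the short Python sketch below integrates the simplest population model, dN/dt = rN, with an arbitrary growth rate.

r = 0.3      # per-capita growth rate, per year (illustrative value)
N = 100.0    # starting population size (illustrative value)
dt = 0.01    # time step, years

# Simple Euler integration of dN/dt = r * N over 10 years
for _ in range(int(10 / dt)):
    dN_dt = r * N          # instantaneous rate of change of the population
    N += dN_dt * dt

print(f"Population after 10 years: {N:.0f}")   # approaches 100 * e**3, about 2009, as dt shrinks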

Rho (ρ or r)




  • Rho (ρ or r) is probably best known for its use in correlation coefficients—that is, in statistical operations that try to quantify the relationship (or association) between two variables, such as between height and weight or between surface area and volume. Pearson’s correlation coefficient, r, is one type of correlation coefficient. It measures the strength of the linear relationship between two variables on a continuous scale from −1 to +1. Values of −1 or +1 indicate a perfect linear relationship between the two variables, whereas a value of 0 indicates no linear relationship. The Spearman rank-order correlation coefficient, rs, measures the strength of a monotonic association between two ranked variables. For example, rs could be used to rank order, and thus prioritize, the risk of a set of health threats to a community.
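The sketch below computes Pearson’s r from scratch in Python for a small, invented height-and-weight sample, just to make the definition concrete.

import math

heights = [150, 160, 165, 172, 180, 188]   # cm (made-up sample)
weights = [52, 60, 63, 70, 79, 85]         # kg (made-up sample)

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

print(f"r = {pearson_r(heights, weights):.3f}")   # close to +1: a strong linear relationship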

Lambda (λ)




  • The Greek letter lambda (λ) is used often in physics, atmospheric science, climatology, and botany with respect to light and sound. Lambda denotes wavelength—that is, the distance between corresponding points of two consecutive waves. “Corresponding points” refers to two points or particles in the same phase—i.e., points that have completed identical fractions of their periodic motion. Wavelength (λ) is equal to the speed (v) of a wave train in a medium divided by its frequency (f): λ = v/f.
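Put into code, λ = v/f is a one-liner; the speed of sound and the speed of light used below are standard textbook values chosen only for illustration.

def wavelength(v, f):
    """Wavelength (m) of a wave travelling at speed v (m/s) with frequency f (Hz)."""
    return v / f

print(f"440 Hz tone in air:  {wavelength(343, 440):.2f} m")       # roughly 0.78 m
print(f"FM radio at 100 MHz: {wavelength(3.0e8, 1.0e8):.1f} m")   # 3.0 m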

The imaginary number (i)




  • Real numbers can be thought of as “normal” numbers—numbers that can be placed on the number line. They include whole numbers (that is, full-unit counting numbers, such as 1, 2, and 3), rational numbers (that is, numbers that can be expressed as fractions and decimals), and irrational numbers (that is, numbers that cannot be written as a ratio or quotient of two integers, such as π or e). Imaginary numbers, in contrast, involve the symbol i, defined as √(−1), which is used to represent the square root of a negative number. Since i = √(−1), √(−16) can be written as 4i. Such numbers are routinely used in electrical engineering and signal processing, where a single complex quantity can represent both the amplitude and the phase of an electrical oscillation.
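Python handles imaginary numbers natively (using the engineers’ j instead of i), so the √(−16) example above, and the amplitude-and-phase idea from signal processing, can be checked in a few lines; the value 3 + 4j is arbitrary.

import cmath

print(cmath.sqrt(-16))        # prints 4j, i.e. 4i

z = 3 + 4j                    # a complex number standing in for an oscillating signal
print(abs(z))                 # its amplitude: 5.0
print(cmath.phase(z))         # its phase in radians: about 0.93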

The Stefan-Boltzmann constant (σ)




  • When physicists are trying to calculate the amount of surface radiation a planet or other celestial body emits in a given period of time, they use the Stefan-Boltzmann law. This law states that the radiant heat energy emitted by a surface per unit area per unit time is proportional to the fourth power of its absolute temperature. In the equation E = σT4, where E is the radiant heat energy emitted per unit area per unit time and T is the absolute temperature in kelvins, the Greek letter sigma (σ) represents the constant of proportionality, called the Stefan-Boltzmann constant. This constant has the value 5.6704 × 10−8 W m−2 K−4 (watts per square metre per kelvin to the fourth power). The law applies only to blackbodies—that is, theoretical physical bodies that absorb all incident heat radiation. Blackbodies are also known as “perfect” or “ideal” emitters, since they are said to emit all of the radiation they absorb. When looking at a real-world surface, modelling it as a perfect emitter with the Stefan-Boltzmann law gives physicists a valuable point of comparison when they attempt to estimate the surface temperatures of stars, planets, and other objects.
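As a quick illustration, the Python snippet below applies E = σT⁴ to the Sun’s effective surface temperature (a standard textbook value of about 5778 K, not a figure from the passage above).

SIGMA = 5.6704e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_flux(T):
    """Power radiated per unit area (W/m^2) by a blackbody at absolute temperature T (K)."""
    return SIGMA * T ** 4

print(f"Sun, T = 5778 K: {radiant_flux(5778):.2e} W/m^2")   # about 6.3e7 W/m^2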

The natural logarithm (e)




  • A logarithm is the exponent or power to which a base must be raised to yield a given number. The natural, or Napierian, logarithm (written ln n) uses the base e ≅ 2.71828, an irrational number, and is a useful function in mathematics, with applications to mathematical models throughout the physical and biological sciences. The number e and its logarithm are often used to calculate the time it takes for something to reach a certain level, such as how long it would take for a small population of lemmings to grow into a group of one million individuals or how many years a sample of plutonium will take to decay to a safe level.
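The two examples in the paragraph above translate directly into code; the 20% growth rate and the 24,000-year half-life below are invented illustrative numbers (the latter merely in the neighbourhood of plutonium-239), not values from the text.

import math

# Time for a population to grow from 1000 to 1,000,000 at 20% continuous growth:
# N(t) = N0 * e**(r*t)  =>  t = ln(N/N0) / r
t_growth = math.log(1_000_000 / 1000) / 0.20
print(f"Time to reach one million: {t_growth:.1f} years")            # about 34.5 years

# Time for a radioactive sample with a 24,000-year half-life to decay to 1%:
decay_constant = math.log(2) / 24_000
t_decay = math.log(100) / decay_constant
print(f"Time to decay to 1% of the original: {t_decay:.0f} years")   # about 159,000 years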

Sunday, August 19, 2018

How to build a super-magnet

Super-strong magnets are a relatively recent phenomenon. Before the 19th century, the only magnets available were naturally occurring rocks made from a mineral called magnetite. This began to change after 1819, when the Danish scientist Hans Christian Ørsted discovered that electric currents in metallic wires create magnetic fields, but the real leap in magnet strength did not come until nearly a century later, with the discovery of superconductivity. Superconductors conduct electricity with perfect efficiency, which is a huge advantage for making strong magnets: today’s most powerful commercially available superconducting magnets can produce a stable field of up to 23 T, which is more than 2000 times stronger than the magnet on your fridge.




The 32 T magnet lowered into its cryostat

In December 2017 improvements in low-temperature-superconductor (LTS) magnet technology, together with advances in high-temperature superconducting (HTS) materials, produced another change in magnet development. The successful demonstration of a 32 T all-superconducting magnet by the National High Magnetic Field Laboratory (NHMFL) in Florida, US, was a significant milestone in the field. The new super-magnet is expected to become available to users in 2019, and its high, stable field will help scientists break new ground in studies of nuclear magnetic resonance, electron magnetic resonance, molecular solids and quantum oscillation studies of complex metals, among other areas. In the longer term, the wider availability of such strong magnetic fields is also expected to enhance our understanding of superconductors and nanomaterials, leading to new nano-devices and applications.
There are, however, several challenges associated with designing and manufacturing magnets capable of producing fields of > 25 T. The amount of stored energy in systems like these is huge, and managing the electromagnetic forces and stresses associated both with energizing the magnet, and with allowing it to warm up and “quench” (as the transition from superconducting to resistive behaviour is known), is no easy task. Producing high-quality, uniform LTS and HTS wires and tapes by the metre (and indeed by the kilometre) is also difficult. The success of the 32 T final design did not happen overnight; rather, it was the product of intense engineering and materials development over nearly a decade.

Finding the right superconductor

A superconducting magnet of ≥25 T typically comprises an outer magnet (or “outsert”) made from LTS materials and an insert that uses HTS materials. In the 32 T NHMFL magnet, the outsert section consists of three coils of niobium-tin (Nb3Sn) and two coils of niobium-titanium (NbTi), all supplied by Bruker-Oxford Superconducting Technology. Together, these coils deliver a field of 15 T via a 250 mm wide-bore magnet. The insert section delivers 17 T in a 34 mm cold bore developed by NHMFL using advanced HTS superconducting tapes manufactured by Superpower Inc. The two sections were integrated by a team of scientists at the NHMFL, supported by a team at my company, Oxford Instruments Nanoscience, which also developed the magnet’s outsert and its cryogenic system.

high-temperature superconducting insert coil

The dual-component design of high-field magnets is necessary because LTS-only magnets cannot produce a field much beyond 21 T at 4.2 K (or 23 T at 2.2 K) due to the physical limitations of LTS materials. For example, NbTi was developed in the 1970s and has been the “workhorse” of superconducting magnets ever since. However, NbTi material can only function as a superconductor at fields of up to 10 T at 4.2 K (and not more than 11.7 T at 2.2 K) for magnets with narrow bores of less than 60 mm. For larger-bore magnets, the maximum field is even lower, limiting the material’s usefulness in high-field magnets. Coils made from Nb3Sn material can remain superconducting at up to 23 T at 2.2 K, much higher than is possible for NbTi, but they also need to have a very fine filament-like structure to prevent a phenomenon known as flux jumping that dissipates energy in the superconductor and can cause the coil to quench prematurely. Hence, the manufacture of Nb3Sn wire has to be done with stringent quality-control procedures in place to ensure that it will perform stably at high fields.
HTS materials, in contrast, can carry significant current at 4.2 K, and they remain superconducting far above the magnetic field limits inherent to niobium-based wires, having shown good performance in fields of up to 45 T (which can be generated by magnets that incorporate resistive as well as superconducting coils). However, these materials come with additional challenges in terms of their cost, reliability and acceptance within the user community. The first generation of HTS wire was made from a cuprate-based superconductor, bismuth strontium calcium copper oxide (Bi-2212). This material performs consistently regardless of magnetic field orientation, but manufacturing it requires the material to undergo a very precise heat treatment in oxygen, after which it becomes extremely brittle and therefore highly strain sensitive. The NHMFL 32 T magnet uses a second-generation HTS wire made from YBCO, a superconducting ceramic composed of yttrium, barium, copper and oxygen. Production of YBCO wires and tapes has increased during the last few years, and their mechanical properties are better than for Bi-2212, but they display anisotropic effects with respect to field orientation that need to be accounted for in magnet design. They also require more sophisticated quench-management systems. In short, both materials have their challenges, but also some advantages, and are strong candidates for high-field magnets.

Managing stored energy and stress

For the superconductors in the magnet’s insert and outsert to operate, both components must be kept fully immersed in a bath of liquid helium at 4.2 K. A scant few μJ of additional energy – equivalent to the potential energy of a pin dropped from the height of just a few centimetres – would be enough to raise the temperature above the point where the coils become resistive, and the magnet undergoes a quench. When that happens, the helium boils off and all the energy stored in the magnet is released very quickly, risking damage to its structure if the quench process is not properly managed. The potential for damage is significant, too: at the maximum field of 32 T, the energy stored in the NHMFL magnet is more than 8.3 MJ, approximately equal to the energy in 2 kg of TNT.
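The TNT comparison is easy to reproduce: dividing the stored energy by the conventional energy content of TNT (about 4.184 MJ per kilogram, a standard value not quoted in the article) gives roughly two kilograms.

stored_energy_j = 8.3e6        # energy stored in the magnet at 32 T, from the article (J)
tnt_energy_per_kg = 4.184e6    # conventional energy content of TNT (J per kg)

print(f"TNT equivalent: {stored_energy_j / tnt_energy_per_kg:.1f} kg")   # about 2.0 kg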
How can you manage the dissipation of 8.3 MJ of energy in a way that won’t cause terminal damage either to the magnet or to objects around it? The solution is a quench-management system that releases the energy very quickly, but in a way that avoids magnet damage through thermal gradients or excessive voltages in the coil. This system (a dedicated and patented solution developed by Oxford Instruments) keeps all of the stresses on the coils, and the voltages across them, within design limits during a quench, so that the materials are never pushed beyond their performance limits. For example, specially designed coil heaters are used to make the magnet coils resistive, which disperses the energy from the quench evenly and safely, and prevents sections of the coil being damaged by localized excessive voltages. In addition, the safety of the integrated magnet system is maintained by sensors that monitor small variations in temperature, voltage, current or the physical position of wires and tapes. Some of this information is then fed into a central processor, which determines whether a “real” quench event is occurring and, if necessary, discharges the stored energy in a timely and safe manner.
In addition to storing large amounts of energy, high-field magnets also experience huge electromagnetic stresses. For a given magnet, these stresses increase quadratically with the field strength; at 32 T the forces on the windings add up to more than 300 tonnes, with a magnetic pressure of more than 250 MPa. Traditional ways of reinforcing magnet coils involve impregnating them with wax to create a self-supporting structure, preventing the Lorentz forces from damaging the coils during operation or from causing mechanical movement that leads to repeated quenches. However, at very high fields this is not enough. Instead, the coils for the LTS outsert were evacuated in a special vacuum chamber, and the chamber was then brought back up to atmospheric pressure after epoxy resin had been introduced to replace the air voids within the coils. This process makes it possible for the coils to withstand forces exceeding 300 tonnes.
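The quadratic scaling of the stress follows from the standard expression for magnetic pressure, P = B²/2μ₀; the quick Python estimate below (an idealized figure, not the engineering analysis behind the actual magnet) comes out comfortably above the 250 MPa quoted in the text.

import math

mu_0 = 4 * math.pi * 1e-7   # permeability of free space, T m/A
B = 32.0                    # field strength, tesla

pressure_mpa = B ** 2 / (2 * mu_0) / 1e6   # magnetic pressure in MPa
print(f"Magnetic pressure at {B:.0f} T: {pressure_mpa:.0f} MPa")   # roughly 400 MPa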

Prospects for discoveries

High-field magnets already play an important role in enabling scientific research and development. Many significant discoveries, including several that were subsequently honoured with Nobel prizes in physics, chemistry or medicine, have been made with the help of strong magnetic fields. High-field superconducting magnets are also an essential technology for particle accelerators and colliders, and they play a critical role in fusion devices such as the International Thermonuclear Experimental Reactor (ITER).
In my view, though, some of the most exciting future applications for devices like the 32 T NHMFL magnet can be found in the field of nanotechnology. High-field magnets will enable the study and manipulation of atoms and molecular structures in the range 1–100 nm, helping us to understand how the properties of materials at this scale can be improved to achieve greater strength, enhanced reactivity, better catalytic function and higher conductivity. In combination with low temperatures, high fields are also a crucial aid in studying, modifying and controlling new states of matter. Superconducting magnets provide these high magnetic fields without the enormous power consumption and large infrastructure requirements of resistive magnets. The new, even more compact 32 T magnet will reduce the associated running costs still further, making high-field research accessible to a broader range of scientists and institutions.

Friday, August 17, 2018

Beating Braess’ paradox to prevent instability in electrical power grids


Power lines
Extra capacity: secondary frequency control could be the key to avoiding Braess’ paradox in electrical power grids. (Courtesy: iStock/Chalabala)
Researchers have calculated that Braess’ paradox – whereby adding transmission capacity to a network can degrade the network’s performance – can be avoided in electrical power grids by implementing the appropriate secondary frequency control. If the result can be demonstrated in real networks, it could help engineers build resilient networks that are able to integrate new sources of energy.
It seems reasonable to expect that adding new transmission lines to a power grid will improve its performance. However, in the 1960s the German mathematician Dietrich Braess showed that adding roads to some traffic networks actually increases congestion – an effect dubbed Braess’ paradox. Since then, scientists and engineers have shown that the paradox can also apply to other interconnected nonlinear dynamical systems, including electricity grids. These grids are essential for modern life – and are constantly evolving, particularly with increases in renewable energy generation – so understanding the implications of Braess’ paradox is essential.
Now scientists in Spain and Germany have joined forces to gain a better understanding of how to mitigate the effects of Braess’ paradox in electricity networks. Benjamin Schäfer and colleagues at the Technical University of Dresden brought with them expertise in Braess’ paradox, whereas Eder Batista Tchawou Tchuisseu and colleagues at the Institute for Cross-Disciplinary Physics and Complex Systems in Mallorca contributed their expertise in controlling failures in electricity networks.

Controlling frequency

Electrical power grids operate in alternating current (AC) mode and all generators in the grid operate at the same frequency (50 Hz in Europe) and are synchronized across the network.
“Frequency stores information about the grid, telling you something about the balance of the grid,” explains Schäfer. “So, if frequency starts to drop, this typically indicates that there is a shortage of supply,” he adds. “Think of a generator rotating at a given frequency: if you draw energy out of the system, it is taken from the rotational energy of this rotor, effectively slowing the generator down and causing the grid frequency to drop.”
There are several mechanisms used in a grid to control frequency fluctuations caused by energy shortages. A few seconds after a frequency drop occurs, primary control kicks into action to stabilize the frequency. However, primary control is unable to restore frequencies to 50 Hz and this leaves the grid susceptible to another drop in frequency.
“In our current system, not all power plants contribute to all types of control, with only a few dedicated power plants having this very fast primary control response,” says Schäfer. He adds that the slower-responding secondary control, which integrates the stabilized low frequency to restore it back to 50 Hz, is rarely considered in dynamical modelling by either physicists or engineers.
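To see the distinction in miniature, the toy Python model below (a single aggregated node obeying a swing-type equation, not the model used in the paper) applies a step loss of supply and compares primary-only control, which leaves a steady offset below 50 Hz, with primary plus secondary (integral) control, which removes it. All parameter values are arbitrary illustrative choices.

def simulate(k_i, k_p=5.0, M=1.0, D=1.0, P_loss=1.0, dt=1e-3, t_end=60.0):
    """Return the final frequency deviation (Hz) after a step loss of supply.

    M*dw/dt = -P_loss - D*w - k_p*w - k_i*integral(w)dt, where k_p is the
    primary (droop) gain and k_i the secondary (integral) gain.
    """
    w = 0.0          # frequency deviation from 50 Hz
    integral = 0.0   # accumulated deviation, acted on by secondary control
    for _ in range(int(t_end / dt)):
        integral += w * dt
        w += dt * (-P_loss - D * w - k_p * w - k_i * integral) / M
    return w

print(f"Primary only:        offset = {simulate(k_i=0.0):+.3f} Hz")   # about -0.167 Hz
print(f"Primary + secondary: offset = {simulate(k_i=2.0):+.3f} Hz")   # essentially 0 Hz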

Curing Braess’ paradox

Schäfer and colleagues, however, were keen to study secondary control because primary control fails to prevent Braess’ paradox affecting electricity grids. The team did an analytical investigation of the general stability of a secondary controller in a simple electric network model consisting of two connected nodes. Then they simulated the addition of lines to a more complex system, invoking Braess’ paradox.
“With no secondary control, adding a line causes a sudden blackout, a prototypical Braess’ paradox. But in the same network with secondary control, adding a line has no effect,” says Schäfer.
Schäfer admits that it had been a challenge to use analytical insight to explain how the secondary controller cured Braess’ paradox in all simulations, and so the team has also proposed additional intuitive explanations.

Ensuring future grid stability

The team describes its findings in the New Journal of Physics. Schäfer says that when the paper was being considered for publication a reviewer asked a very useful question: “How much control do you need?” For example, does secondary control need to be applied at every node, both supplier and consumer? The team tried simulations with varied levels of control and found that secondary control had to be implemented at all nodes for Braess’ paradox to be reliably cured in a network. 
“We firstly warn that new line installations should be double-checked to ensure they don’t cause Braess’ paradox,” says Schäfer. “Our second recommendation is that to prevent Braess’ paradox it’s important to distribute secondary control.” The importance of distributing secondary control across local nodes as well as generators led the team to encourage the involvement of energy consumers in future energy grid plans, for instance by implementing demand control schemes that incentivize households to use energy at times of low demand.
Looking towards the future, Schäfer and his team are keen to demonstrate the prevention of Braess’ paradox experimentally. This, they hope, will go some way towards demonstrating to engineers the importance of understanding Braess’ paradox as a collective phenomenon. “Convincing engineers is important and then we can focus on counter-measures, experimentally figuring out which parts of the network we need to control to guarantee stability.”

Wednesday, August 15, 2018

Organic solar cells break new efficiency record



Tandem solar cells
Solution-processed tandem solar cells with 17.3% efficiency

Organic solar cells could be as efficient as those based on inorganic materials such as silicon and perovskites. This is the new finding from researchers in China who have determined which photoactive material combinations are best for making “tandem” devices. Test cells made in the laboratory reach power conversion efficiencies (PCEs) of 17.3%, a value that is significantly higher than the current 14 to 15%. This value might even reach 25% with further optimization, they say.
Organic photovoltaics (OPVs) show much promise for next-generation solar cells thanks to their low cost, and the fact that they are flexible and can be printed over large areas. Indeed, researchers have succeeded in improving the PCE of these cells from around 5% to 14-15% over the last decade by making so-called tandem cells in which photoactive layers with complementary light absorption characteristics are stacked on top of each other. In this way, they have made cells that absorb over a wider range of sunlight wavelengths than single materials. This is because the photoactive organic materials in each subcell can be designed with different but matching energy bandgaps.
Although 14-15% is impressive, this value lags behind that of photovoltaic platforms based on inorganic materials, which for their part boast PCEs of between 18 and 22%. One of the main reasons for the relatively low PCE of OPVs is the limited sunlight absorption range of materials used to make the rear subcell in the devices. Indeed, most of these materials can only absorb photons with energies above around 1.3 eV (corresponding to wavelengths shorter than roughly 950 nm), which means they miss a large part of the solar spectrum.
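The energy-to-wavelength conversion behind that figure is worth spelling out: the wavelength in nanometres is roughly 1240 eV·nm divided by the photon energy in eV, so 1.3 eV sits at about 950 nm, in the near-infrared. A minimal Python check:

HC_EV_NM = 1239.84   # Planck constant times speed of light, in eV*nm

def ev_to_nm(energy_ev):
    """Wavelength (nm) of a photon with the given energy (eV)."""
    return HC_EV_NM / energy_ev

print(f"1.3 eV -> {ev_to_nm(1.3):.0f} nm")   # about 954 nm, just beyond the visible range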

Screening for the most efficient photoactive materials

A team of researchers led by Yongsheng Chen of Nankai University in Tianjin has now developed a semi-empirical model to predict which materials work best together in tandem cells. “We screened for the most efficient photoactive materials and then found the optimal match for both the front and rear subcells in these devices,” explains Chen. “Our analyses are based on previous theoretical works and state-of-the-art experimental results.
“Thanks to our calculations, we were able to make solution-processed two-terminal monolithic tandem OPV cells with a remarkable new record PCE of 17.3%.”
“According to our analyses, the OPV of 17.3% could be increased further to 25% and these materials could be as good as other solar technologies,” Chen says. “OPVs thus show great potential for commercial applications.”
The team, which includes researchers from the National Center for Nanoscience and Technology in Beijing and South China University of Technology in Guangzhou, says that it is now busy looking for even better material combinations for making OPV tandem solar cells and improving their stability. “We found that while the initial stability tests show that the devices are stable and degrade by only 4% after 166 days, their long-term stability needs further testing and optimization,” says Chen.

Tuesday, August 14, 2018

‘Super window’ uses krypton to reduce energy costs


Thermal image of a house
Cooling off: large quantities of heat are lost through windows. (Courtesy: iStock/gabort71)
The Lawrence Berkeley National Laboratory (Berkeley Lab) has joined forces with window companies Andersen Corporation and Alpen High Performance Products to resurrect its “thin triple” super window design, first patented in 1991. The new collaboration aims to commercialize the super window, which is at least twice as insulating as 98% of the windows for sale today – potentially halving the estimated $20 billion in heating energy lost every year through windows in the US.

Window of opportunity

The new design is an evolution of the common double-glazed window. It has two layers of 3 mm thick glass that sandwich a third layer of very thin glass that is less than 1 mm thick. A standard low-emissivity coating that helps to block long-wave infrared rays is applied to the thin central glass. Finally, argon that would usually fill the double-glazed window cavity to reduce heat transfer is replaced by krypton, which has superior insulating properties.
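To get a feel for why krypton helps, consider a deliberately oversimplified, conduction-only comparison of the two gas fills in Python, ignoring convection, radiation and the low-e coating, and assuming a 6 mm cavity (none of these details are specified in the article; the conductivities are approximate room-temperature values).

k_argon = 0.0177     # approximate thermal conductivity of argon, W/(m K)
k_krypton = 0.0095   # approximate thermal conductivity of krypton, W/(m K)
gap_width = 0.006    # assumed cavity width, m

for name, k in [("argon", k_argon), ("krypton", k_krypton)]:
    resistance = gap_width / k   # conductive thermal resistance of the gap, m^2 K/W
    print(f"{name:8s} gap: R = {resistance:.2f} m^2 K/W")

Even in this crude picture, the krypton-filled gap resists conductive heat flow nearly twice as well as the same gap filled with argon.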
Ahead of its time in 1991, the super window garnered “little commercial interest,” according to the recently retired Stephen Selkowitz, former leader of the Windows and Daylighting research group at Berkeley Lab. But after the design had gathered dust for 22 years, rising public awareness of climate change and green technology prompted the researchers to revisit the concept five years ago. In the 1990s, the costs of both the thin central glass and krypton were too high to be viable. Now, however, market forces have seen these prices fall dramatically, convincing the window companies to invest in the technology.
Unlike other highly insulating designs that have struggled – such as triple, vacuum and aerogel glazing – the super window is almost the same thickness and weight as a double-glazed window, requiring no costly redesign of window frames. What is more, only small changes in the window manufacturing process are needed.
“The key here is to provide a path for window manufacturers to make the transition to dramatically new product capabilities, but without the cost and risk of a full production line makeover,” adds Selkowitz. “We call it a ‘drop in’ replacement for the existing insulating glass unit.”
Although Selkowitz warns that “it will likely be a year before we collectively have done enough ‘due diligence’ for the window companies to decide to invest in creating a marketable product”, simulations and prototypes built and tested in the lab suggest widespread adoption of the super window could have a significant impact on home energy efficiency. Indeed, the windows exceed the performance of a well-insulated wall over the course of a year, and can even help heat homes located in colder climates by locking in heat from the Sun.

Modular system builds effective vaccines



The University of Oxford researchers who developed the mi3 nanoparticle.

There are two ways to make vaccines: use a virus or bacterium and kill or modify it; or produce a single part of the virus or bacterium, a so-called antigen, which trains the immune system to recognize the whole pathogen. The first approach carries a small risk of negative side effects; the second is less effective. Theodora Bruun and her colleagues at the University of Oxford have developed a nanoparticle that promises to be both safe and effective (ACS Nano 10.1021/acsnano.8b02805).
Their particles display 60 copies of the antigen. This high concentration of antigens in one place makes it easier for the immune system to recognize them, compared with recognizing 60 individual antigens. While such particles are not new, previous versions were not soluble or stable enough and had a low production yield.

Stable particles from volcanoes

The new nanoparticle that the Oxford University team developed is based on a protein from the bacterium Thermotoga maritima, which was found in hot springs near a volcano and is among the few bacteria known to thrive at such high temperatures.

mi3 nanoparticle

The researchers computationally optimized a thermostable protein from this bacterium to form a dodecahedron. The nanoparticles contained 60 copies of this optimized protein, called mi3. Bruun and co-workers showed that these particles are highly stable and survive temperatures of up to 75 °C, freezing and freeze-drying. The production yield was 10-fold higher than for a previously used particle.
Every one of the 60 copies of mi3 in a particle is fused to a “SpyCatcher” protein domain, which can covalently bind a “SpyTag” peptide simply by mixing the two components. By attaching antigens to the SpyTag, a variety of vaccines can be made based on the mi3 particle.
Such a modular system has many advantages. “Developing a modular and robust nanoscaffold … could contribute to major challenges in human and animal health, including vaccines to rapidly evolving pathogens (e.g. HIV, malaria) or zoonotic outbreaks (e.g. Ebola virus, Rift Valley fever),”  the authors explain. The particle core could be stockpiled and combined with relevant antigens to enable a rapid response to outbreaks.

But does it work?

To test whether the new particle is suitable for vaccination, the researchers attached a malaria antigen using the SpyCatcher-SpyTag docking system. Compared with individual antigens, the antigen-decorated particle produced a more robust immune response, generating more antibodies (one of the two arms of specific immune defence in the body, the other being killer cells). The particle not only elicited more antibodies than the individual antigens did, but the antibodies produced were also more effective, binding the antigen more tightly.
So far, it seems that the new nanoparticle developed by Bruun and her team may prove a promising new tool to develop vaccines.