Use the RIGHT/LEFT buttons to navigate between topics.
Use the menu button at the bottom-left corner of the screen to access any of the resources.
The Dark Matter project is a collaboration between:
The University of Birmingham
Istanbul Technical University (MIAM - Center for Advanced Studies in Music)
CERN
The Birmingham Ensemble for Electroacoustic Research (BEER)
art@CMS
QuarkNet
Team members:
Thomas McCauley (Physicist)
Kostas Nikolopoulos (Physicist)
Maurizio Pierini (Physicist)
Konstantinos Vasilakos (Composer)
Scott Wilson (Composer)
Content and Website design:
Emma Margetson
Milad K. Mardakheh
Particle physics (also known as high-energy physics) is a branch of physics that studies the nature of the particles that form matter and radiation. Particle physics investigates the smallest detectable particles and the fundamental interactions necessary to explain their behaviour. It also looks for entirely new particles and interactions.
It was thought at one time that atoms were the smallest fundamental particles. By "fundamental" we mean that they are not composed of smaller particles. We know now that atoms have a tiny but dense, positive nucleus and a cloud of negative electrons (e-). The nucleus consists of protons (p+), which are positively charged, and neutrons (n), which have no charge. The protons and neutrons themselves are not fundamental particles. They are composed of even smaller particles called quarks.
The theory called the Standard Model of Particle Physics (often just called the Standard Model) explains what the world is made up of and what holds it together.
The fundamental parts of the model include the quarks, the leptons, and the force-carrier particles (bosons).
All stable, known matter is made up of quarks and leptons. In fact, all of atomic matter is made up only of up and down quarks (combinations of which make up protons and neutrons) and electrons.
Interactions between particles can be thought of in terms of exchanging force-carrier particles. There are three fundamental forces in the Standard Model: the electromagnetic force, the weak force, and the strong force.
There is a fourth force, gravity, which is not included in the Standard Model. How gravity fits together with the other three forces is one of the fundamental questions in physics.
Quarks
Quarks only exist in groups with other quarks and are never found alone.
The heaviest known fundamental particle is the top quark, which is as heavy as a gold nucleus.
Composite particles made of two or more quarks are called hadrons. Protons and neutrons are examples of hadrons.
Leptons
The best-known lepton is the electron (e-). The other two charged leptons are the muon (μ) and the tau (τ), which are charged like electrons but have more mass: the muon is heavier than the electron, and the tau is heavier still.
The other leptons are the three types of neutrinos (ν).
They have no electrical charge, very little mass, and are not easily detected.
For every type of matter particle found, there also exists a corresponding **antimatter** particle, or **antiparticle**. Their masses are the same, but properties such as charge are opposite. For example, an electron is negatively charged, whereas an anti-electron (called a positron) is positively charged. When a matter particle meets its antiparticle, they annihilate and convert into pure **ENERGY** in the form of photons.
The visible matter we are familiar with makes up less than 5% of the universe. The rest of the matter does not interact via the electromagnetic force. This means that this unknown matter does not absorb, reflect, or emit light, making it extremely hard to spot; it has therefore been given the name "dark matter". Researchers have been able to infer the existence of dark matter only from the gravitational effect it seems to have on visible matter.
Dark matter in and of itself makes up only another 27% of the universe. Something called dark energy is believed to make up the rest. So most of what makes up the universe is of an unknown nature and "dark".
One way particle physicists search for new particles and interactions is to smash particles together and study what happens. This is an extremely simplified way to describe it, but it's not inaccurate.
A good analogy for how physicists study particles through collisions is a car crash. Imagine a person who wanted to look inside cars. By crashing two cars together at very high speed, we can break the cars apart and see inside. In the same way, physicists crash two particles together in order to break them apart and study what is inside.
Reality is even stranger: what can really happen is that the pieces of the car combine to make entirely new things that weren't even part of the original cars. It would be like crashing two cars together head-on and producing a dinosaur! It's best not to take this analogy too far, though.
One way to explain this is through the well-known equation E = mc². Mass is simply a form of energy. Matter can be converted into energy and vice versa. As stated before, if one combines an electron and a positron, they annihilate and leave behind energy. This energy can then be used to form new particles, but only with a combined mass up to the initial energy. If one gives the initial particles more and more energy and collides them, then more and heavier particles can be created from their collisions.
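To get a feel for the numbers, here is a small worked example in Python of the energy released when an electron and a positron annihilate. The constants are standard values; the calculation is purely illustrative.

```python
# Energy released when an electron and a positron annihilate (E = mc^2).
# Constants are standard textbook values; this is an illustration only.
m_electron = 9.109e-31       # electron (and positron) mass in kg
c = 2.998e8                  # speed of light in m/s

energy_joules = 2 * m_electron * c**2    # two particle masses convert to energy
energy_MeV = energy_joules / 1.602e-13   # 1 MeV = 1.602e-13 joules

print(f"{energy_MeV:.3f} MeV")           # about 1.022 MeV: two 511 keV photons
```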
One way particle physicists look for dark matter is to search for entirely new particles, ones that could possibly be dark matter.
Accelerators
The particles are given more and more energy in a process known as acceleration, in which the particles' speeds are increased until they approach (but never reach) the speed of light c.
At the points where the particles collide are detectors that record the "debris" from the collisions.
Detectors
A detector (actually multiple detectors) is a large, complicated device consisting of layers of material that exploit the different properties of particles to catch and measure the energy and momentum of each one. In general, detectors record the stable particles left over from the decays of unstable (and perhaps more interesting!) particles like W, Z, and Higgs bosons that get created in the collisions.
Events
The detectors are like massive digital cameras that take snapshots of the collisions (which happen billions of times a second).
Each snapshot is called an event. Each collected event is analyzed and "reconstructed" to see what was produced in the event.
The most powerful accelerator ever built is the Large Hadron Collider (LHC) at CERN in Geneva, accelerating protons and colliding them with a total energy of 13 TeV.
It accelerates protons to nearly the speed of light -- in clockwise and anti-clockwise directions -- and then collides them at four locations around its ring. At these points, the energy of the particle collisions gets transformed into mass, spraying particles in all directions. At each of these points sits a large detector. One such detector is called the Compact Muon Solenoid, or simply CMS.
A few facts about the Large Hadron Collider:
The Large Hadron Collider accelerates protons to nearly the speed of light, in clockwise and anti-clockwise directions, and then collides them at four locations around its ring. At these points, detectors such as CMS detect and measure the "debris".
The "visible" particles from these collisions such as electrons, muons, and photons come from the decay of heavier unstable particles such as W, Z, and Higgs bosons. Quarks are produced as well but quickly interact with other quarks to form "jets" of particles.
By measuring the properties of these electrons, muons, photons, and jets as precisely as possible, we can reconstruct the properties of the particles that produced them, such as their mass. This includes measuring how much energy they have, their charge (where applicable), where they were produced, and in which direction they went.
The protons in the collisions travel along the beam pipe (depicted in the image below; this is along the z axis of our coordinate system). The detectors themselves form a "cylindrical onion" of layers of detectors in order to detect all that comes out of the collision. One such layer (the electromagnetic calorimeter, which measures energy) is shown in blue below.
Before each collision, the protons travel along the direction of the Large Hadron Collider beams, and not in directions perpendicular to the beams (which are defined as the x and y directions). This means that their momentum in these perpendicular directions – their "transverse momentum" – is zero. A fundamental principle of physics is that momentum is conserved (constant), and so, after the collision, the sum of the transverse momenta of the products of the collision should still be zero. Therefore, if we add up the transverse momenta of all the visible particles produced in the collision and find it is not zero, this could be because we have missed the momentum carried away by invisible particles.
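As a rough sketch of this bookkeeping, the snippet below sums transverse momenta in Python. The particle values are invented for illustration and are not real CMS data.

```python
import math

# Invented (pt, phi) pairs for the visible particles of one event:
# pt is transverse momentum in GeV, phi the azimuthal angle in radians.
particles = [(45.0, 0.3), (38.0, 2.9), (12.5, -1.7)]

# Sum the momentum components in the plane perpendicular to the beams.
px = sum(pt * math.cos(phi) for pt, phi in particles)
py = sum(pt * math.sin(phi) for pt, phi in particles)

# Conservation says this vector sum should be zero; any imbalance is the
# "missing transverse momentum" that invisible particles may have carried off.
missing_pt = math.hypot(px, py)
print(f"Missing transverse momentum: {missing_pt:.1f} GeV")
```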
The objects and parameters you will find in the data include:
Sound synthesis is the technique of generating sound from scratch using electronic hardware or software. The most common use of synthesis is musical, where electronic instruments called synthesizers are used in the performance and recording of music.
Sound is the perceived vibration (oscillation) of air resulting from the vibration of a sound source. Vibration at a regular (periodic) rate can be perceived as a pitch. A common example of a pitch is the A note that orchestras tune to. This is called A440, as it corresponds to 440 cycles of vibration per second (its frequency). Sounds consisting of vibration at only one rate are called sine waves. In the real world, however, sounds usually consist of multiple vibrations. We can describe such complex sounds as the sum of simpler vibrations (partials) at different rates and loudnesses (amplitudes). Each partial is a simple sine wave (often called a pure tone) with its own respective frequency and amplitude.
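As a minimal sketch of this idea, the following Python snippet builds a complex tone as a sum of sine-wave partials; the particular frequencies and amplitudes are arbitrary choices.

```python
import numpy as np

sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)   # one second of samples

# Three partials: (frequency in Hz, amplitude). 440 Hz is the A440 pitch.
partials = [(440.0, 1.0), (880.0, 0.5), (1320.0, 0.25)]

# Each partial is a pure sine wave; the complex tone is their sum.
tone = sum(amp * np.sin(2 * np.pi * freq * t) for freq, amp in partials)
```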
An oscillator creates sound by looping a waveform at a particular frequency. The shape of the waveform changes the sound produced, and in particular its timbre (tone colour). The four classic shapes are listed below (and sketched in code after the list):
1. Sine Wave
2. Square Wave
3. Triangle Wave
4. Sawtooth Wave
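Here is a generic Python sketch of how these four shapes can be generated; it is a textbook construction, not the actual IPSOS implementation.

```python
import numpy as np

def waveform(shape, freq, t):
    """Looped oscillator waveform evaluated at times t (in seconds)."""
    phase = (freq * t) % 1.0                    # position within each cycle, 0..1
    if shape == "sine":
        return np.sin(2 * np.pi * phase)
    if shape == "square":
        return np.where(phase < 0.5, 1.0, -1.0)
    if shape == "triangle":
        return 4.0 * np.abs(phase - 0.5) - 1.0
    if shape == "sawtooth":
        return 2.0 * phase - 1.0
    raise ValueError(f"unknown shape: {shape}")

t = np.linspace(0, 0.01, 441, endpoint=False)   # 10 ms at a 44.1 kHz sample rate
examples = {s: waveform(s, 440.0, t)
            for s in ("sine", "square", "triangle", "sawtooth")}
```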
An envelope describes how a sound changes over time in terms of its amplitude (loudness). Different instruments have different shapes, with, for example, long, slow starts (like violins slowly fading in) or immediate ones with a slower fadeout (like a drum). Using an ADSR envelope, we can control and tailor the sound of the synthesizer as we prefer using the parameters below:
Attack is the time taken for the initial run-up of level from silence to the loudest level.
Decay is the time taken for the subsequent run down from the attack level to the designated sustain level after the initial attack.
Sustain is the level during the main sequence of the sound's duration. This corresponds to the held part of a sound in instruments such as strings or winds.
Release is the time taken for the level to decay from the sustain level to silence after the note is released. This can be very quick, or fade away slowly, as in a bell for example.
While attack, decay, and release refer to times, sustain refers to a level.
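A minimal Python sketch of such an envelope, assuming straight-line segments (real synthesizers often use curved ones):

```python
import numpy as np

def adsr(attack, decay, sustain, release, hold, sr=44100):
    """Straight-line ADSR envelope. attack/decay/release/hold are times in
    seconds; sustain is a level between 0 and 1."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)      # rise to peak
    d = np.linspace(1.0, sustain, int(decay * sr), endpoint=False)   # fall to sustain
    s = np.full(int(hold * sr), sustain)                             # held level
    r = np.linspace(sustain, 0.0, int(release * sr))                 # fade to silence
    return np.concatenate([a, d, s, r])

# A percussive shape: fast attack, quick decay, quiet sustain, short release.
env = adsr(attack=0.005, decay=0.1, sustain=0.2, release=0.3, hold=0.5)
# Multiplying any waveform sample-by-sample by env shapes its loudness in time.
```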
Examples:
1. Short attack sound.
2. Long, sustained, low sound.
3. Sequence sound.
Other parameters included in IPSOS:
Detune: This describes the effect heard when tuning one oscillator sharp or flat with respect to a second oscillator. This produces a fattening of the sound, or a harmonic effect if the tuning interval is wide enough.
MIDInote: Musical pitch (how low or high), given as the pitch of the pressed key with a value between 0 and 127. Higher values correspond to higher pitches; 'middle C' on a piano is 60. (The standard conversion from MIDI note to frequency is sketched after this list.)
Duration: Amount of time a sound will play for.
Chord: Sonify all particles simultaneously.
Sequence: Sonify all particles one at a time in order, perhaps creating a melody or fragment.
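The conversion from MIDI note to frequency is standard (note 69 is A440, and each semitone multiplies the frequency by 2^(1/12)). The sketch below applies it to some invented per-particle notes; it is not the IPSOS source code.

```python
def midi_to_freq(note):
    """Standard MIDI tuning: note 69 is A440; a semitone is a factor of 2**(1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(round(midi_to_freq(60), 2))   # 'middle C' (MIDI note 60) -> 261.63 Hz

# Invented notes for three particles in one event:
notes = [60, 64, 67]
freqs = [midi_to_freq(n) for n in notes]
# Chord: sound all of these frequencies simultaneously.
# Sequence: sound them one at a time, in order (a short melodic fragment).
```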
Like the one in the above image, data visualisation displays communicate information (the data) through visual means, e.g. charts, graphs, diagrams, etc. An auditory display is any display that uses sound instead of images (dots, lines, shapes, etc.) to demonstrate the data. Sonification is the transformation of data of any kind (numbers, images, text) into non-speech audio, to represent information.
Human beings naturally have a superior capability to recognize changes and patterns in the different properties of sound through time, such as pitch (frequency), loudness, timbre, texture, etc. This is called auditory perception. Sonification takes advantage of this ability and translates data relationships into changes in sound properties so that they can be understood by the listener.
A very simple example of sonification is a doorbell! The information, which is the fact that someone is at the door, is being transformed into a distinctive sound so that whenever we hear it, we can immediately interpret and understand it.
Below is another simple example of sonification. Listen to how the pitch of the sound changes according to the position of the y variable as we move along the x axis on the parabola graph.
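In code, a mapping from the parabola's y values to pitch might look like the following Python sketch; the frequency range is an arbitrary choice.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 41)
y = x ** 2                                   # the parabola

# Map y linearly onto an audible pitch range (200-1000 Hz, chosen arbitrarily).
low, high = 200.0, 1000.0
freqs = low + (y - y.min()) / (y.max() - y.min()) * (high - low)

# Playing a short tone at each frequency in turn traces the curve in sound:
# the pitch falls to its minimum at x = 0 and rises again on the other side.
```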
Sonification is a very useful and common process in our daily lives. From the simplest of functions, such as tapping on a watermelon to find out whether it is ripe, to the alert sounds produced by technologies and devices such as alarms, phones, computers, and cars, to analysing changes and patterns in complex data, we use and rely on sonification in a wide variety of jobs and tasks.
Alarms, alerts, and warnings
Alerts and notifications are sounds used to indicate that something has occurred, or is about to occur, or that the listener should immediately attend to something in the environment. Alerts and notifications tend to be simple and particularly overt. For instance, the beeping sound of the microwave is a sonification which indicates that the cooking time has finished.
Status, process, and monitoring messages
There are situations in which the human listener needs to be constantly aware of the current or ongoing state of a system or process. For example, surgeons need to be aware of the heart rate of patients at all times during surgery, and therefore use heart-monitoring systems which, in addition to visualisations, use sonification to represent heartbeats.
Data exploration
This is what is generally meant by the term “sonification”, and the intention is to convey information about an entire data set or relevant aspects of it. Sonifications for data exploration differ from status or process indicators in that they use sound to show how the values in the data are connected to one another, rather than giving information about a momentary state, as alerts and process indicators do.
Art, entertainment, sports, and exercise
Notable among their different applications, sonification and auditory displays have been used to enable visually impaired children and adults to take part in team sports, or as a means of bringing some of the experience and excitement of dynamic exhibits to the visually impaired.
In addition, sonifications of events and datasets can be used as the basis for musical compositions, installations and sound-art works. While the designers and/or composers often attempt to convey something to the listener through these sonifications, it is not for the pure purpose of information delivery.
Auditory icons and Earcons
Auditory icons are short communicative sounds that have an analogical relationship with the process or action they represent. In other words, the sound you hear actually sounds like what it is meant to represent. For example, emptying the trash folder on your computer makes the sound of crumpling paper, and the example below indicates the flow of water or liquid in a system.
Earcons, on the other hand, use sounds only as symbols for actions or processes; so the sounds do not necessarily sound like the actions or processes. For instance, the simple beeping of your phone when you receive a text message. Below is an example of an earcon representing the action of minimizing or making something smaller.
Audification is the most basic method of direct sonification, whereby waveforms are translated directly into sound. For example, seismic waves, travelling through the Earth’s crust as a result of the vibrations of the tectonic plates over an extended period of time, have been audified so that we can hear actual earthquakes! This approach may require that the waveforms be frequency- or time-shifted (sped up or slowed down) into the range of frequencies that humans can hear and perceive as pitches.
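To illustrate the time-shifting arithmetic, here is a Python sketch with placeholder random samples standing in for a real seismic recording.

```python
import numpy as np

# Placeholder samples standing in for one day of seismometer readings
# recorded at 1 sample per second (86400 samples). Not real earthquake data.
signal = np.random.randn(86400)

audio_rate = 44100                       # playback rate in samples per second
speed_up = audio_rate / 1.0              # the recording rate was 1 sample/second
duration = len(signal) / audio_rate      # about 2 seconds of sound

# Played back this fast, a ground vibration of 0.01 Hz becomes a 441 Hz tone,
# well inside the range of human hearing.
print(f"One day of data becomes {duration:.1f} s of audio, sped up {speed_up:.0f}x")
```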
Model-based sonification is a more complex technique of sonification using computer simulations, whereby a virtual model of the data is built which produces sounds according to the relationships within the data, as the user interacts with it. A model, then, is like an instrument that the user ‘plays’ and their interaction drives the sonification.
Parameter mapping sonification represents changes in some dimension of the data with changes in an acoustic dimension (of sound) to produce a sonification. As we have already learned, synthesized sound has a multitude of changeable dimensions or parameters such as waveform, pitch, duration, ADSR envelope parameters, etc. This is the form of sonification which IPSOS uses.
What is a ‘mapping’?
Mapping or data-mapping is the process of creating direct/indirect relationships between two distinct datasets, whereby a change in one dataset would cause a relative change in the other.
Remember the earlier example of the sonification of the parabola graph? This is a parameter mapping sonification since the position parameter of y is directly mapped to the pitch of the sound. We can create a different mapping for the same parabola graph, this time to the loudness (amplitude) of the sound, instead of its pitch.
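The alternative mapping might be sketched like this, again with an arbitrary normalisation:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 41)
y = x ** 2

# Normalise y to 0..1 and use it as the amplitude of a fixed-pitch tone.
amps = (y - y.min()) / (y.max() - y.min())

# The tone is now silent at x = 0 and loudest at the ends of the curve,
# while its pitch stays constant.
```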
The term topology is used in many different fields of study and has a distinct meaning and application in each. However, in general mathematics, topology tells us how elements of one set relate spatially to each other.
Now take, for example, the sound produced by the state change of water in a whistling tea kettle as it approaches boiling point. With the rise of the water temperature and the increase in steam pressure, the frequency (pitch) of the whistling sound also increases, until it reaches a point where the user knows it is time to turn off the stove and pour the boiling water into the teacup. Here we have a simple one-to-one mapping between one parameter, the water temperature, and another, the sound's frequency/pitch.
One-to-one mappings are not the only way of mapping data features to sound parameters. A second type is mapping one data feature (e.g. the steam pressure in the same example) to not one but multiple sound parameters at the same time, for instance waveform, pitch, and duration. This is known as one-to-many or divergent mapping.
A third type is many-to-one or convergent mapping, which is the reverse of the above: multiple different data features (water temperature, pressure, acidity) are mapped to one sound parameter (pitch) and have a collective effect on it.
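The following Python sketch contrasts the two mapping types, with invented ranges and weights borrowed from the kettle example.

```python
def one_to_many(temperature):
    """Divergent mapping: one data feature (water temperature in Celsius)
    drives several sound parameters. Ranges are invented for illustration."""
    norm = (temperature - 20.0) / 80.0             # normalise 20-100 C to 0..1
    pitch = 200.0 + norm * 800.0                   # Hz: hotter -> higher pitch
    duration = 1.0 - norm * 0.7                    # seconds: hotter -> shorter notes
    shape = "sine" if norm < 0.5 else "sawtooth"   # hotter -> brighter timbre
    return pitch, duration, shape

def many_to_one(temperature, pressure, acidity):
    """Convergent mapping: several data features collectively set one
    sound parameter (pitch, in Hz). Weights are invented for illustration."""
    return 200.0 + temperature * 5.0 + pressure * 50.0 + acidity * 20.0
```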
Parameter-mapping sonification is useful in a wide range of complex applications and tasks including navigation, kinematic tracking, medical, environmental, geophysical, oceanographical and astrophysical sensing. In addition to numerical datasets, parameter mapping sonification has been used to sonify static and moving images. Sonification of human movement, for example, is used in medicine for diagnosis and rehabilitation, and also for athletic training (including golf, rowing, ice-skating, and tai-chi).
Parameter-mapping is one of the most commonly used techniques of sonification in music, and is sometimes also referred to as musification. Here are some examples of musical works that use this technique.
Iannis Xenakis’ mapping of statistical and stochastic processes to sound in his Metastasis (1954) and other works.
Alvin Lucier (above image) played an ensemble of percussion instruments using the alpha waves generated by his brain (EEG sonification), in his piece called Music for Solo Performer (1965).
Charles Dodge composed the work titled, The Earth’s Magnetic Field (1970), where the Kp index, describing the fluctuations of the Earth's magnetic field, caused by solar winds, is mapped to the pitches of both diatonic and chromatic scales.
John Dunn and Mary Anne Clarke composed the extended work called Life Music: The Sonification of Proteins (1999), in which different amino acid and protein folding patterns are mapped to pitch and instrumentation.
Frank Halbig’s Antarktika (2006) translates ice-core data reflecting the climatic development of our planet into the score for a string quartet.
Jonathan Berger’s Jiyeh (2008) maps the contours of oil dispersion patterns from a catastrophic oil spill in the Mediterranean Sea. Using a sequence of satellite images, the spread of the oil over a period of time was sonified and scaled to provide a sense of the enormity of the environmental event.
Chris Chafe’s Tomato Quintet (2007, 2011) sonifies the ripening process of tomatoes. The sonification process mapped carbon dioxide, temperature and light readings from sensors in each vat to synthesis and processing parameters. Subsequently, the duration of the resulting sonification was accelerated to different time scales.
1. Choose a Collision Event. There is a drop-down menu of events to choose from.
2. Choose which synthesis parameters are addressed to the constituents of each particle. There will be different numbers of particles for each collision, and they may be of different types (e.g. electron or muon).
3. Choose whether this sonification is to be played as a chord or a sequence.
4. Choose the synth type:
4.1 Sine
4.2 Square
4.3 Triangle
4.4 Sawtooth
5. Press PLAY to listen to the sound.
6. Press STOP to stop the sound.
7. Adjust the mapping range of parameters for the following:
7.1 Attack
7.2 Decay
7.3 Sustain
7.4 Release
7.5 Detune
7.6 Midinote
7.7 Duration
8. Once you are happy with the sound, press the plus button. This will save the sound to a button at the bottom right, which will change colour to green.
9. Press the button to hear the sound again.
10. Create up to 9 sounds. Press the buttons to start to create a rhythm, combination, or musical sequence from the sounds. You can also use the corresponding keyboard number keys (1-9) to trigger the sounds.
NOTE: If you create more than nine sounds, they currently overwrite the existing ones in order.
Activity 1: Explore
Discuss and explain the IPSOS app (30 minutes), showing examples.
Split the class into 3 (or more) groups: Proton, Neutron, Electron, Quark, etc.
Each group to create 6 sounds (1 each) and discuss in groups.
Activity 2: Develop
Each group to be assigned a particular energy/colour/shape to consider. This will provide 3 (or more) contrasting groups of sounds.
Participants to explore and create 9 sounds each.
Share within groups the sounds created. Each group to choose and then share 2 sounds with the class to discuss.
Activity 3: Plan
Create a plan/structure for the performance.
Consider: how will the different groups interact (maybe this can be directed through numbers, or atoms colliding)? Will there be 3 separate sections, or will they overlap? Will the performers be seated, or will they move around the space? Does the group need a conductor, or will this be random?
Activity 4: Practice
Activity 5: Performance