Probability of a melody complying with a listener's auditory expectations.

Given a generally accepted probability of a certain event A happening (the prior probability, P (A)), how is that probability changed (the posterior probability) by additional information B about the event?

- Applied to melody
In a melodic line, consecutive notes normally follow a pattern that is pleasing to the ear. Simple melodies follow this pattern more rigorously than more complex or sophisticated creations.

Consider the melodic line, "Che farò senza Euridice" by Gluck:

In its simplest description, the melody may be represented as a sequence of notes:

**{E4, F4, G4, G4, G4, C5, C5, B4, B4, C5, G4, G4, A4, A4, G4, F4, F4, E4, C4, E4, G4, C5, C5, B4, C4, E4, G4, C5, C5, B4}**  (1)

The duration of the notes and the rhythmic aspect of the melody are not part of this analysis and will be introduced later. The number following each note indicates its octave as on a piano keyboard; C4 is middle C, whose frequency is 261.626 Hz.

Two consecutive notes define an interval, and we address the following question: what is the probability that the individual notes and their succession contribute to a melodic pattern biased towards auditory expectations?

- Auditory expectations
- Intervals and their integer ratios
In the Western musical style, and since Pythagoras, the intervals generally most consonant to the human ear are those represented by small integer ratios.

Consider two consecutive notes of frequencies f_E4 and f_G4, for example the notes E4 and G4 of sequence (1). The ratio f_E4 / f_G4 can be converted to a nearby integer ratio with a small denominator:

**f_E4 / f_G4 ≈ N_E4 / N_G4**  (2)

The sum of the numerator and denominator in Eq. (2),

**S(E4, G4) = N_E4 + N_G4**  (3)

is a positive integer which may be used as a "dissonance metric": an indication of the lack of compliance (or otherwise) of the E4-G4 interval with the auditory expectation of a listener. Interval lengths are measured in semitones of the piano chromatic scale. Let's call IN(1) the 1st interval, IN(i) being the i-th interval in the sequence. The greater the sum, the smaller the listener's expectation is supposed to be. From now on, we also refer to this sum as the "fractional sum".
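The fractional sum of Eqs. (2)-(3) can be sketched in a few lines of Python; the equal-tempered tuning, the A4 = 440 Hz reference, and the denominator bound of 10 are illustrative assumptions, not prescriptions of the algorithm.

```python
from fractions import Fraction

def fractional_sum(f1, f2, max_den=10):
    """Approximate f1/f2 by a small-integer ratio (Eq. 2) and
    return the sum of numerator and denominator (Eq. 3)."""
    ratio = Fraction(f1 / f2).limit_denominator(max_den)
    return ratio.numerator + ratio.denominator

def freq(semitones_from_a4):
    """Equal-tempered frequency relative to A4 = 440 Hz (assumed tuning)."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

f_e4, f_g4 = freq(-5), freq(-2)    # E4 and G4
print(fractional_sum(f_e4, f_g4))  # E4/G4 ~ 5/6 (a minor third), so S = 11
```

A consonant octave (ratio 2/1) gives the small value S = 3, while intervals poorly approximated by small integers give larger sums, in line with the dissonance-metric reading above.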

Back to Bayes’ theorem:

**P (A given B) = P (B given A) x P (A) / P (B)**

that we apply to the following question:

Given a sequence of notes in a melody, what is the probability that the interval between two consecutive notes contributes to the overall probability that the sequence is biased towards auditory expectation?

**P (A given B)** -> probability that the sequence is biased, based on the evidence (B) of two successive notes.

**P (A)** -> prior, generally accepted probability of A, to be verified by the new evidence B.

**P (B)** -> degree of confidence in the new evidence B:

**P (B) = P (B given A) x P (A) + P (B given not A) x P (not A)**  (4)

Or, more explicitly:

**P (bias given IN(i)) = P (IN(i) given bias) x P (bias) / [P (IN(i) given bias) x P (bias) + P (IN(i) given no bias) x P (no bias)]**  (5)

**P (bias)** is the prior probability of the melody being biased. Its value for the 1st interval is set at 0.5 in the present algorithm; on subsequent intervals, it is the value found for the previous interval.

**P (no bias) = 1 - P (bias)**  (6)

**P (IN(i) given bias)** is the probability that the interval under observation occurs with a biased composer. As mentioned earlier, according to our model this probability is inversely proportional to *S* in Eq. (3). The probability model we chose is the sloped distribution shown in the top display of the two figures below. The horizontal axis extends from 0 to 40; for *S* > 40, the probability density is set at zero (actually 0.001, for computational reasons). The area under the plot is equal to 1, as it should be for a probability density distribution. The blue circles in the lower figure represent the values of *S* from Eq. (3) for the melody under consideration (Euridice).

Figure 1

Figure 2

The histogram above shows, over 1000 hypothetical songs randomly picked according to the chosen probability density, the number of songs whose fractional sum of Eq. (3) falls within each interval. About 23% have *S* = 0 - 5 and 8% have *S* = 35 - 40.

The equation of the sloped line (probability of sum given bias) in Figure 1 is:

**P (S given bias) = (c + αS) / (c l_s + α l_s²/2)**  (7)

l_s is the range under consideration, 0 ≤ S ≤ l_s. The numerator is the expression for a linear slope in *S*, the independent variable; the denominator is the area of the "sloped distribution" region, which guarantees that the integral of P (S given bias) over 0 ≤ S ≤ l_s equals 1, as it should for a density distribution function. c is a constant proportional to the probability density at S = 0, and α is proportional to the slope of the line. For the case under consideration, we have taken l_s = 40, α = -0.0025, c = 0.1. For the unbiased case, P (S given no bias) is the constant probability density across the interval l_s.

- Proximity
*l*._{s} - Proximity
There is an additional constraint, influenced by cultural preferences, that has to do with the length of the interval between consecutive notes. Scholars in the field have found a statistical pattern by which the probability of an interval depends inversely on its length: the human ear likes successive notes not to be too distant in pitch. Proceeding as in the fractional-sum case, the models for the probability densities P_Prox,Bias(l) and P_Prox,noBias(l), within an interval length of ± one octave (12 semitones), are shown in the following figure:

Figure 3

The variance of the distribution is σ² = 8.0 (interval length in semitones). Data gathered and processed on over 6,000 songs (Essen Folksong collection, see Ref. iii) resulted in a variance of the "proximity profile" (analogous to our interval length) of 7.2.
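A sketch of the proximity densities in Python; only σ² = 8.0 comes from the text, while the Gaussian shape of the biased density, the uniform unbiased density over ± one octave, and all names are our assumptions.

```python
import math

SIGMA2 = 8.0   # variance quoted in the text
OCTAVE = 12    # semitones

def p_prox_given_bias(l):
    """Assumed bell-shaped density favoring small interval lengths l (semitones)."""
    return math.exp(-l * l / (2 * SIGMA2)) / math.sqrt(2 * math.pi * SIGMA2)

def p_prox_given_nobias(l):
    """Assumed uniform density over the +-1 octave range."""
    return 1.0 / (2 * OCTAVE)

print(p_prox_given_bias(0) > p_prox_given_bias(7))  # small leaps are more probable
```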

- Combined fractional sum (FS) and proximity (PR) probabilities
We assume the initial prior probability (0.5, or 50%) to be the same for the fractional sum and proximity cases, and we adjust it turn by turn based on the latest result.

The Bayes probability for FS is Eq. (5):

**P (bias given S_i) = P (S_i given bias) x P (bias) / [P (S_i given bias) x P (bias) + P (S_i given no bias) x P (no bias)]**  (8)

The Bayes probability for PR is:

**P (bias given l_i) = P (l_i given bias) x P (bias) / [P (l_i given bias) x P (bias) + P (l_i given no bias) x P (no bias)]**  (9)

IN(i) is the i-th interval between two consecutive notes, S_i its fractional sum, l_i its length in semitones. We apply Bayes' formula sequentially, musical interval by musical interval. At each step the prior probability P (A) is replaced by the newly found P (A given B):

**P (A given B) = P (B given A) x P (A) / (P (B given A) x P (A) + P (B given not A) x P (not A))**

- Repetition
Excessive repetition of a melodic line should, we think, be included as a factor affecting expectation and surprise. As a metric for repetition we have used the autocorrelation of the note succession, a well-known statistical quantity. Given a sequence {a_1, a_2, ..., a_n}, the autocorrelation r_k measures whether notes separated by a lag k form a repetitive pattern^iv:

**r_k = Σ_{i=1}^{n-k} (a_i - ā)(a_{i+k} - ā) / Σ_{i=1}^{n} (a_i - ā)²**  (10)

where ā is the mean of the notes.

The r_k's can take any value between 0 and 1: 0 implies total lack of autocorrelation (extreme surprise), 1 total correlation (total bias). The a_i's are the progressive semitone numbers of a piano scale. This simplistic interpretation may lend itself to gross misinformation about the intentions of the musical piece: repetition of a musical phrase is not always a concession to auditory expectation; for instance, Philip Glass' music is purposefully full of repetitions. After several studies of musical repetition and correlation, we have adopted the following strategy.
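Eq. (10) translates directly into code; a minimal sketch, where the periodic test sequence is purely illustrative:

```python
def autocorr(a, k):
    """Lag-k autocorrelation r_k of Eq. (10)."""
    n = len(a)
    mean = sum(a) / n
    num = sum((a[i] - mean) * (a[i + k] - mean) for i in range(n - k))
    den = sum((x - mean) ** 2 for x in a)
    return num / den

# A sequence that repeats with period 3 shows strong lag-3 autocorrelation
print(round(autocorr([1, 2, 3] * 8, 3), 3))  # -> 0.875
```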

In our experience, the autocorrelation values r fall into three regions:

- **0 ≤ r ≤ 0.3**, region typical of totally uncorrelated (or random) sequences.
- **0.3 < r ≤ 0.8**, region typical of most classical and popular melodies.
- **0.8 < r ≤ 1.0**, region where the rhythmic aspect dominates.

- Procedure
The repetition pattern is analyzed ahead of the Bayes process for the fractional sum and proximity probabilities. The algorithm computes the autocorrelation: if it is less than or equal to 0.8, it is simply displayed, no further action is taken, and the Bayes algorithm takes over.

If the autocorrelation is > 0.8, we found that the surprise algorithm is swamped by the repeated pattern, and we use a little artifice to damp it, best illustrated by the following example. Take the pattern P having a strong autocorrelation and lag k = 3:

**P = 1 2 3 1 2 3 2 3 4 2 3 4**

P is replaced by its possible k = 3 patterns, but without repetition. This process smooths the subsequent Bayes algorithm without appreciably affecting the final surprise.
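One possible reading of this artifice, sketched in Python; the function name and the choice to drop only consecutive duplicate k-note blocks are our assumptions.

```python
def collapse_patterns(seq, k):
    """Replace the sequence by its consecutive k-note patterns with
    immediate repetitions removed (one reading of the damping artifice)."""
    out, prev = [], None
    for i in range(0, len(seq) - k + 1, k):
        block = tuple(seq[i:i + k])
        if block != prev:
            out.extend(block)
        prev = block
    return out

print(collapse_patterns([1, 2, 3, 1, 2, 3, 2, 3, 4, 2, 3, 4], 3))  # -> [1, 2, 3, 2, 3, 4]
```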

- Probability and surprise
Proceeding as above, interval by interval, one obtains the progression of the bias probability up to its final value, which summarizes the whole musical sequence.
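The progression just described can be sketched as follows, here for the fractional-sum evidence only, using the Eq. (7) values l_s = 40, α = -0.0025, c = 0.1 quoted earlier; the uniform no-bias density 1/l_s and all function names are our assumptions.

```python
# Sequential Bayes update over the fractional sums of a melody's intervals.
L_S, ALPHA, C = 40, -0.0025, 0.1
AREA = C * L_S + ALPHA * L_S ** 2 / 2     # denominator of Eq. (7)

def p_s_given_bias(s):
    """Sloped density of Eq. (7), floored at 0.001 beyond l_s."""
    return (C + ALPHA * s) / AREA if s <= L_S else 0.001

def p_s_given_nobias(s):
    """Assumed uniform density across 0..l_s."""
    return 1.0 / L_S

def bayes_step(prior, p_ev_bias, p_ev_nobias):
    """One application of Eq. (8): posterior bias probability."""
    num = p_ev_bias * prior
    return num / (num + p_ev_nobias * (1.0 - prior))

def sequence_bias(fractional_sums, prior=0.5):
    """Update the bias probability interval by interval (prior starts at 0.5)."""
    for s in fractional_sums:
        prior = bayes_step(prior, p_s_given_bias(s), p_s_given_nobias(s))
    return prior

# A run of very consonant intervals (small S) pushes the probability towards bias
print(sequence_bias([3, 3, 5]) > 0.5)
```

The proximity update of Eq. (9) proceeds identically, with the interval length l_i as evidence in place of S_i.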

A more easily definable way to measure "surprise" (the opposite of bias) is given by the formula (Ref. i)

**Surprise = - log₂ P (bias)**

depicted by the following histogram for the Euridice melody:

Figure 4. Autocorrelation = 0.55, k (lag) = 1

- Surprise calibration
In order to calibrate the surprise, we compare the output of our program on the following MIDI files:

- a) Euridice, as in Figure 4.
- b) Thirty notes randomly picked from a chromatic scale over a range of 2 octaves.

Figure 5. Autocorrelation = 0.30, k = 5

- c) Arpeggios, six notes repeated 5 times.

Figure 6. Autocorrelation = 0.90, k = 5

- d) Finally, a simple repetition of the same C4 note.

Figure 7
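The surprise compared across these files is obtained from the final bias probability by the log transform Surprise = -log₂ P(bias); a minimal sketch:

```python
import math

def surprise(p_bias):
    """Surprise in bits: the less biased (more improbable) the melody, the larger the value."""
    return -math.log2(p_bias)

print(surprise(0.5))   # -> 1.0  (the initial 50% prior corresponds to 1 bit)
```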

- Calibration
The previous section gives us an indication of the relative predictability of a melody compared to others.

| Surprise range | Summarizing comment | Additional comment |
| --- | --- | --- |
| 1 - 5 | Strong compliance^a | No great excitement. Maybe in next line |
| 5 - 10 | Compliance | Pleasing for a while |
| 10 - 20 | Mild surprise | Pleasant. Maybe the Goldilocks area |
| 20 - 40 | Unexpected development | May take a while to appreciate |
| 40 + | Not a melody in the classical sense | Perhaps in atonal music |

a) To auditory expectation.

- i) Loy, Gareth. *Musimathics: The Mathematical Foundations of Music*. Cambridge: MIT Press, 2006.
- ii) Lopresto, Michael. "Experimenting with consonance and dissonance." *Physics Education* 44, no. 2 (2009): 145. doi:10.1088/0031-9120/44/2/005
- iii) Temperley, David. *Music and Probability*. MIT Press, 2010.
- iv) Solanas, Manolov, and Sierra. "Lag-one autocorrelation in short series: Estimation and hypotheses testing." *Psicologica* 31 (2010): 357-381.