Reference

Rhythm
class beatsearch.rhythm.Rhythm

Rhythm interface
This class defines abstract rhythm functionality that all rhythm classes must implement. It also provides generic functionality built on top of these abstract methods.
class Precondition

Preconditions for Rhythm methods.
bpm
See Rhythm.set_bpm and Rhythm.get_bpm.
Return type: float
duration_in_ticks
See Rhythm.set_duration_in_ticks and Rhythm.get_duration_in_ticks.
Return type: int
get_beat_duration(unit=None)
Returns the duration of one musical beat, based on the time signature.
Parameters: unit (Union[Unit, Fraction, str, float, None]) – musical unit in which to return the beat duration, or None to get the beat duration in ticks
Return type: Union[int, float]
Returns: the duration of one beat in the given musical unit, or in ticks if no unit is given
Raises: TimeSignatureNotSet – if no time signature has been set
get_bpm()
Returns the tempo of this rhythm in beats per minute.
Return type: float
Returns: tempo of this rhythm in beats per minute
get_duration(unit=None, ceil=False)
Returns the duration of this rhythm in the given musical time unit, or in ticks if no unit is given.
Parameters:
- unit (Union[Unit, Fraction, str, float, None]) – time unit in which to return the duration, or None to get the duration in ticks
- ceil (bool) – if True, the returned duration is rounded up (ignored if unit is None)
Return type: Union[int, float]
Returns: duration of this rhythm in the given unit, or in ticks if no unit is given
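For example, a sketch of querying durations in different units (the MIDI file path is hypothetical; see MidiRhythm below):

    from fractions import Fraction
    from beatsearch.rhythm import MidiRhythm

    rhythm = MidiRhythm("./rumba_clave.mid")  # hypothetical file path
    print(rhythm.get_duration())                           # in ticks (unit=None)
    print(rhythm.get_duration(Fraction(1, 4)))             # in quarter notes
    print(rhythm.get_duration(Fraction(1, 4), ceil=True))  # rounded up to whole quarter notes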
get_duration_in_measures()
Returns the duration of this rhythm in musical measures, as a floating-point number.
Returns: the duration of this rhythm in measures, as a floating-point number
Raises: TimeSignatureNotSet – if no time signature has been set
get_duration_in_ticks()
Returns the duration of this rhythm in ticks.
Return type: int
Returns: duration of this rhythm in ticks
get_last_onset_tick()
Returns the position of the last onset of this rhythm in ticks, or -1 if this rhythm is empty.
Return type: int
Returns: position of the last onset in ticks, or -1 if this rhythm is empty
get_measure_duration(unit=None)
Returns the duration of one musical measure, based on the time signature.
Parameters: unit (Union[Unit, Fraction, str, float, None]) – musical unit in which to return the measure duration, or None to get the measure duration in ticks
Return type: Union[int, float]
Returns: the duration of one measure in the given unit, or in ticks if no unit is given
Raises: TimeSignatureNotSet – if no time signature has been set
get_onset_count()
Returns the number of onsets in this rhythm.
Return type: int
Returns: number of onsets in this rhythm
get_resolution()
Returns the tick resolution of this rhythm in PPQN (pulses-per-quarter-note).
Return type: int
Returns: tick resolution of this rhythm in PPQN
get_time_signature()
Returns the time signature of this rhythm.
Return type: Optional[TimeSignature]
Returns: the time signature of this rhythm as a TimeSignature object
resolution
See Rhythm.set_resolution and Rhythm.get_resolution.
Return type: int
set_bpm(bpm)
Sets this rhythm's tempo in beats per minute.
Parameters: bpm (Union[float, int]) – new tempo in beats per minute
Return type: None
Returns: None
set_duration(duration, unit=None)
Sets the duration of this rhythm in the given time unit (in ticks if no unit is given).
Parameters:
- duration (Union[int, float]) – new duration in the given unit, or in ticks if no unit is given
- unit (Union[Unit, Fraction, str, float, None]) – time unit of the given duration, or None to set the duration in ticks
Return type: None
Returns: None
set_duration_in_ticks(requested_duration)
Sets the duration of the rhythm to the closest possible duration to the requested duration and returns the actual new duration.
Parameters: requested_duration (int) – requested new duration in ticks
Return type: int
Returns: actual new duration
set_resolution(resolution)
Sets this rhythm's tick resolution and rescales the onsets to the new resolution.
Parameters: resolution (int) – new tick resolution in PPQN
Returns: None
set_time_signature(time_signature)
Sets the time signature of this rhythm. Pass None to remove the time signature.
Parameters: time_signature (Union[TimeSignature, Tuple[int, int], Sequence[int], None]) – new time signature, either as an iterable of (numerator, denominator), as a TimeSignature object, or None to remove the time signature
Return type: None
Returns: None
time_signature
See Rhythm.set_time_signature and Rhythm.get_time_signature.
Return type: Optional[TimeSignature]
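A short sketch of the tempo and time-signature accessors (assuming a concrete rhythm instance, e.g. a MidiRhythm; file path hypothetical):

    from beatsearch.rhythm import MidiRhythm

    rhythm = MidiRhythm("./rumba_clave.mid")  # hypothetical file path
    rhythm.set_bpm(120)                       # tempo in beats per minute
    rhythm.set_time_signature((4, 4))         # (numerator, denominator) iterable
    print(rhythm.get_time_signature())        # TimeSignature object (or None if unset)
    print(rhythm.bpm)                         # property shorthand for get_bpm()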
MonophonicRhythm

class beatsearch.rhythm.MonophonicRhythm

Bases: beatsearch.rhythm.Rhythm

Interface for monophonic rhythms.
get_onsets()
Returns the onsets within this rhythm as a tuple of onsets, where each onset is an instance of Onset.
Return type: Tuple[Onset, ...]
Returns: the onsets within this rhythm as a tuple of Onset objects
onsets
See MonophonicRhythm.get_onsets.
Return type: Tuple[Onset, ...]
set_onsets(onsets)
Sets the onsets of this rhythm.
Parameters: onsets (Union[Iterable[Onset], Iterable[Tuple[int, int]]]) – onsets, either as an iterable of (absolute tick, velocity) tuples or as an iterable of Onset objects
Returns: None
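For example, a sketch of setting onsets from (absolute tick, velocity) tuples (MonophonicRhythmImpl is assumed to be the concrete implementation of this interface):

    from beatsearch.rhythm import MonophonicRhythmImpl  # assumed concrete class

    rhythm = MonophonicRhythmImpl()
    rhythm.set_resolution(4)  # 4 PPQN, i.e. one tick per sixteenth step
    # rumba clave (x--x---x--x-x---) as (tick, velocity) tuples
    rhythm.set_onsets([(0, 100), (3, 100), (7, 100), (10, 100), (12, 100)])
    print(rhythm.get_onset_count())  # -> 5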
PolyphonicRhythm

class beatsearch.rhythm.PolyphonicRhythm

Bases: beatsearch.rhythm.Rhythm
exception EquallyNamedTracksError
Bases: beatsearch.rhythm.TrackNameError
Thrown by set_tracks when given multiple tracks with the same name.
exception IllegalTrackName
Bases: beatsearch.rhythm.TrackNameError
Thrown by set_tracks if __validate_track_name__ returns False.
exception TrackNameError
Bases: Exception
Thrown if there's something wrong with a track name.
clear_tracks()
Clears all tracks.
Returns: None
get_track_by_index(track_index)
Returns the track with the given index, or None if the index is invalid.
Parameters: track_index (int) – track index
Returns: Track object, or None
get_track_by_name(track_name)
Returns the track with the given name, or None if this rhythm has no track with that name.
Parameters: track_name (str) – track name
Returns: Track object, or None
get_track_count()
Returns the number of tracks within this rhythm.
Returns: number of tracks
get_track_iterator()
Returns an iterator over the tracks of this polyphonic rhythm. Each iteration yields a PolyphonicRhythmImpl.Track object. The order in which the tracks are yielded is the same for every iterator returned by this method.
Return type: Iterator[Track]
Returns: iterator over this rhythm's tracks, yielding PolyphonicRhythmImpl.Track objects
get_track_names()
Returns a tuple containing the names of the tracks in this rhythm. The names appear in the same order in which the tracks are yielded by the iterator returned by beatsearch.rhythm.PolyphonicRhythm.get_track_iterator().
Return type: Tuple[str, ...]
Returns: tuple containing the track names
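A sketch of iterating over the tracks of a polyphonic rhythm (the MIDI file path is hypothetical, and the Track objects are assumed to expose the monophonic rhythm API):

    from beatsearch.rhythm import MidiRhythm

    rhythm = MidiRhythm("./drum_loop.mid")  # hypothetical file path
    print(rhythm.get_track_names())         # e.g. ('kick', 'snare', 'hi-hat')
    for track in rhythm.get_track_iterator():
        print(track.get_onset_count())      # onset count per track (assumed Track API)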
set_tracks(tracks, resolution)
Sets the tracks and updates the resolution of this rhythm. The given tracks are automatically parented to this polyphonic rhythm. This method also resets the duration of the rhythm to the position of the last onset in the given tracks.
Parameters:
- tracks (Iterable[Track]) – iterable yielding PolyphonicRhythmImpl.Track objects
- resolution (int) – resolution of the onsets in the given tracks, in PPQN
Return type: None
Returns: None
Feature Extraction

Monophonic Feature Extractors
class beatsearch.feature_extraction.BinaryOnsetVector(unit=Unit.EIGHTH, **kw)

Computes the binary representation of the note onsets of the given rhythm, where each step at which a note onset happens is denoted with a 1 and every other step with a 0. The given unit determines the step size of the binary vector.
Note
This is an independent feature extractor.
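A sketch of computing the binary onset vector of the rumba clave (the MonophonicRhythm.create.from_string factory and the 'sixteenth' unit name are assumptions, not documented above):

    from beatsearch.rhythm import MonophonicRhythm
    from beatsearch.feature_extraction import BinaryOnsetVector

    rhythm = MonophonicRhythm.create.from_string("x--x---x--x-x---")  # assumed factory
    extractor = BinaryOnsetVector(unit="sixteenth")  # 'sixteenth' unit name assumed
    print(extractor.process(rhythm))  # expected: [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0]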
class beatsearch.feature_extraction.NoteVector(unit=Unit.EIGHTH, tied_notes=True, cyclic=True, **kw)

This feature extractor finds the durations of the onsets in the given rhythm. It computes the so-called note vector, which is a sequence of musical events. Musical events can be of three types:
- NOTE: a sounding note
- TIED_NOTE: a non-sounding note
- REST: a rest
Events are expressed as tuples containing three elements:
- e_type: the type of the event (NOTE, TIED_NOTE or REST)
- e_dur: the duration of the event
- e_pos: the position of the event
To iterate over the events in a note vector and print some information about the events, one could do:
    e_type_names = {
        NoteVector.NOTE: "note",
        NoteVector.TIED_NOTE: "tied note",
        NoteVector.REST: "rest",
    }

    for e_type, e_dur, e_pos in note_vector:
        e_pos_end = e_pos + e_dur
        e_name = e_type_names[e_type]
        print("There's a %s from step %i to step %i" % (e_name, e_pos, e_pos_end))
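Here, note_vector could be obtained by running the extractor on a rhythm, for example (a sketch; the from_string factory is an assumption):

    from beatsearch.rhythm import MonophonicRhythm
    from beatsearch.feature_extraction import NoteVector

    rhythm = MonophonicRhythm.create.from_string("x--x---x--x-x---")  # assumed factory
    note_vector = NoteVector(unit="sixteenth").process(rhythm)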
Note
This is an independent feature extractor.
class beatsearch.feature_extraction.BinarySchillingerChain(unit=Unit.EIGHTH, values=(1, 0), **kw)

Returns the Schillinger notation of the given rhythm, where each onset is a change of a "binary note".

For example, given the rumba clave rhythm and values (0, 1):

    X--X---X--X-X---
    0001111000110000

However, given the values (1, 0), the Schillinger chain is the inverse:

    X--X---X--X-X---
    1110000111001111
values
The two values of the Schillinger chain returned by process(). E.g., when set to ('a', 'b'), the Schillinger chain returned by process() will consist of 'a' and 'b'.
class beatsearch.feature_extraction.ChronotonicChain(unit=Unit.EIGHTH, **kw)

Computes the chronotonic chain representation of the given rhythm.

For example, given the rumba clave rhythm:

    X--X---X--X-X---
    3334444333224444
class beatsearch.feature_extraction.OnsetDensity(unit=Unit.EIGHTH, **kw)

Computes the onset density of the given rhythm. The onset density is the number of onsets over the number of positions in the binary onset vector of the given rhythm.
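For instance, the rumba clave has 5 onsets over 16 sixteenth-note steps, giving an onset density of 5/16 = 0.3125 at a sixteenth-note unit. A sketch (from_string factory assumed):

    from beatsearch.rhythm import MonophonicRhythm
    from beatsearch.feature_extraction import OnsetDensity

    rhythm = MonophonicRhythm.create.from_string("x--x---x--x-x---")  # assumed factory
    print(OnsetDensity(unit="sixteenth").process(rhythm))  # expected: 0.3125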
class beatsearch.feature_extraction.OnsetPositionVector(unit=Unit.EIGHTH, quantize=True, **kw)

Finds the absolute onset times of the notes in the given rhythm.
Note
This is an independent feature extractor.
class beatsearch.feature_extraction.MonophonicOnsetLikelihoodVector(unit=Unit.EIGHTH, priors='cyclic', **kw)

Computes the likelihood of a step being an onset for every step in the given monophonic rhythm. The onset likelihoods range from 0 to 1. An onset likelihood of 0 means that there's no evidence, according to the model used by this extractor, that the corresponding step will contain an onset. An onset likelihood of 1 means that it is very likely that the corresponding step will contain an onset.
The likelihoods are based on predictions made on different levels. Each level has a window size corresponding to a musical unit. For each level, the prediction is that the current group will equal the last group. For example:
    Rhythm       x o x o x o x o x o x x x o o o

    Predictions
    1/16  |?|x|o|x|o|x|o|x|o|x|o|x:x|o|o|o|
    1/8   |? ?|x o|x o|x o|x o|x o|x x|x o|
    1/4   |? ? ? ?|x o x o|x o x o|x x x o|
    1/2   |? ? ? ? ? ? ? ?|x o x o x o x o|
    1/1   |? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?|
The groups containing question marks are groups that have no antecedents. The predictions of these uncertain groups are handled according to the "priors" property:

- cyclic: it is assumed that the rhythm is a loop, leaving no uncertain groups
- optimistic: predictions are set to the ground truth (predictions of uncertain groups are always correct)
- pessimistic: predictions are set to the negative ground truth (predictions of uncertain groups are always incorrect)
priors
The way uncertain groups are handled. One of ['cyclic', 'optimistic', 'pessimistic']. See beatsearch.feature_extraction.MonophonicOnsetLikelihoodVector for more info.
Return type: Union[str, Tuple[int]]
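A sketch of running the extractor with each of the prior handling strategies on the example rhythm above (from_string factory assumed):

    from beatsearch.rhythm import MonophonicRhythm
    from beatsearch.feature_extraction import MonophonicOnsetLikelihoodVector

    rhythm = MonophonicRhythm.create.from_string("x-x-x-x-x-xxx---")  # assumed factory
    for priors in ("cyclic", "optimistic", "pessimistic"):
        extractor = MonophonicOnsetLikelihoodVector(unit="sixteenth", priors=priors)
        print(priors, extractor.process(rhythm))  # one likelihood in [0, 1] per step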
class beatsearch.feature_extraction.MonophonicVariabilityVector(unit=Unit.EIGHTH, priors='cyclic', **kw)

Computes the variability of each step of the given monophonic rhythm. First, the onset likelihood vector is computed with beatsearch.feature_extraction.MonophonicOnsetLikelihoodVector. The variability of step N is the absolute difference between the likelihood of step N being an onset and whether there actually was an onset on step N.
priors
See beatsearch.feature_extraction.MonophonicOnsetLikelihoodVector.priors().
Return type: str
class beatsearch.feature_extraction.MonophonicSyncopationVector(unit=Unit.EIGHTH, salience_profile_type='equal_upbeats', cyclic=True, **kw)

Finds the syncopations in a monophonic rhythm. The syncopations are computed with the method proposed by H.C. Longuet-Higgins and C.S. Lee in their work titled "The Rhythmic Interpretation of Monophonic Music".
The syncopations are returned as a sequence of tuples containing three elements (see the sketch below):
- the syncopation strength
- the position of the syncopated note
- the position of the rest against which the note is syncopated
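A sketch of extracting syncopations (from_string factory assumed; the rhythm is also assumed to carry a 4/4 time signature, which the salience profile requires):

    from beatsearch.rhythm import MonophonicRhythm
    from beatsearch.feature_extraction import MonophonicSyncopationVector

    rhythm = MonophonicRhythm.create.from_string("x--x---x--x-x---")  # assumed factory
    extractor = MonophonicSyncopationVector(unit="sixteenth")
    for strength, note_pos, rest_pos in extractor.process(rhythm):
        print("syncopation (strength %i) of step %i against step %i"
              % (strength, note_pos, rest_pos))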
cyclic
When set to True, the extractor assumes that the processed rhythm is a loop, which allows it to find syncopations of notes that are syncopated against events "earlier" in the rhythm.
Return type: bool
salience_profile_type
The type of salience profile to be used for syncopation detection. This must be one of: ['hierarchical', 'equal_upbeats', 'equal_beats']. See beatsearch.rhythm.TimeSignature.get_salience_profile() for more info.
Return type: str
class beatsearch.feature_extraction.SyncopatedOnsetRatio(unit=Unit.EIGHTH, ret_fraction=False, **kw)

Computes the number of syncopated onsets over the total number of onsets. The syncopations are computed with beatsearch.feature_extraction.MonophonicSyncopationVector.
ret_fraction
When set to True, process() will return a (numerator, denominator) tuple instead of a float.
Return type: bool
class beatsearch.feature_extraction.MeanSyncopationStrength(unit=Unit.EIGHTH, **kw)

Computes the average syncopation strength per step. The step size depends on the unit (see set_unit). The syncopations are computed with MonophonicSyncopationVector.
class beatsearch.feature_extraction.MonophonicMetricalTensionVector(unit=Unit.EIGHTH, salience_profile_type='equal_upbeats', normalize=False, cyclic=True, **kw)

Computes the monophonic metrical tension of a rhythm. If E(i) is the i-th event in the note vector of the given rhythm (see beatsearch.feature_extraction.NoteVector), the tension during event E(i) equals the metrical weight of the starting position of E(i) for rests and sounding notes. If E(i) is a tied note (non-sounding note), the tension of E(i) equals the tension of E(i-1).
normalize
When set to True, the tension will have a range of [0, 1].
Return type: bool
class beatsearch.feature_extraction.MonophonicMetricalTensionMagnitude(unit=Unit.EIGHTH, salience_profile_type='equal_upbeats', **kw)

Computes the magnitude of the monophonic metrical tension vector (the Euclidean distance to the zero vector).
normalize
When set to True, the tension magnitude will range from 0 to 1.
Return type: bool
class beatsearch.feature_extraction.IOIVector(unit=Unit.EIGHTH, mode='post_note', quantize=True, **kw)

Computes the time differences between the notes in the given rhythm (inter-onset intervals). The elements of the vector depend on this IOIVector extractor's mode property:

PRE_NOTE
Time difference between the current note and the previous note. The first note will return the time difference with the start of the rhythm. For example, given the rumba clave rhythm:

    X--X---X--X-X---
    0 3 4 3 2

POST_NOTE
Time difference between the current note and the next note. The last note will return the time difference with the end (duration) of the rhythm. For example, given the rumba clave rhythm:

    X--X---X--X-X---
    3 4 3 2 4
Note
This is an independent feature extractor.
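A sketch reproducing both examples above (from_string factory assumed):

    from beatsearch.rhythm import MonophonicRhythm
    from beatsearch.feature_extraction import IOIVector

    rhythm = MonophonicRhythm.create.from_string("x--x---x--x-x---")  # assumed factory
    print(IOIVector(unit="sixteenth", mode="pre_note").process(rhythm))   # expected: [0, 3, 4, 3, 2]
    print(IOIVector(unit="sixteenth", mode="post_note").process(rhythm))  # expected: [3, 4, 3, 2, 4]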
class beatsearch.feature_extraction.IOIDifferenceVector(unit=Unit.EIGHTH, quantize=True, cyclic=True, **kw)

Computes the interval difference vector (a.k.a. difference of rhythm vector) of the given rhythm. Per note, this is the difference between the current onset interval and the next onset interval. So, if N is the number of onsets, the returned vector will have a length of N - 1. This is different for cyclic rhythms, where the last onset's interval is compared with the first onset's interval, yielding a vector of length N. The inter-onset interval vector is computed in POST_NOTE mode.
For example, given the POST_NOTE inter-onset interval vector for the rumba clave:

    [3, 4, 3, 2, 4]

The interval difference vector would be:

    [4/3, 3/4, 2/3, 4/2]       # cyclic = False
    [4/3, 3/4, 2/3, 4/2, 3/4]  # cyclic = True
cyclic
Set to True for cyclic behaviour. See beatsearch.feature_extraction.IOIDifferenceVector.
class beatsearch.feature_extraction.IOIHistogram(unit=Unit.EIGHTH, **kw)

Computes the number of occurrences of the inter-onset intervals of the notes of the given rhythm, in ascending order. The inter-onset intervals are computed in POST_NOTE mode.

For example, given the rumba clave rhythm, with inter-onset vector [3, 4, 3, 2, 4]:

    [
        [1, 2, 2],  # occurrences
        [2, 3, 4]   # bins (interval durations)
    ]
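The same result as a sketch (from_string factory assumed):

    from beatsearch.rhythm import MonophonicRhythm
    from beatsearch.feature_extraction import IOIHistogram

    rhythm = MonophonicRhythm.create.from_string("x--x---x--x-x---")  # assumed factory
    occurrences, bins = IOIHistogram(unit="sixteenth").process(rhythm)
    print(occurrences)  # expected: [1, 2, 2]
    print(bins)         # expected: [2, 3, 4]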
Polyphonic Feature Extractors
class beatsearch.feature_extraction.MultiTrackMonoFeature(mono_extr_cls, *args, multi_track_mode='per_track', aux_to=None, **kwargs)

This class can be used to "upgrade" a monophonic rhythm feature extractor to a polyphonic one. Objects of this class have an underlying monophonic rhythm feature extractor. The behaviour of calling process() depends on the current beatsearch.feature_extraction.MultiTrackMonoFeature.multi_track_mode() of the MultiTrackMonoFeature extractor (see the sketch below):

PER_TRACK
The monophonic feature is computed once for each track in the given polyphonic rhythm, and the features are returned as a tuple, in the same track order.

PER_TRACK_COMBINATION
For each track combination (e.g., given a polyphonic rhythm with kick, snare and hi-hat, the combinations are [kick], [snare], [hi-hat], [kick, snare], [kick, hi-hat], [snare, hi-hat] and [kick, snare, hi-hat]), a new, merged, monophonic rhythm is created. The monophonic feature is then computed on each merged monophonic rhythm. The features are returned as a tuple of (track_indices, mono_feature) tuples, where track_indices is a tuple containing the indices of the tracks that were merged into the monophonic rhythm that yielded the corresponding feature.
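A sketch of upgrading the monophonic OnsetDensity extractor to a polyphonic one (MIDI file path hypothetical):

    from beatsearch.rhythm import MidiRhythm
    from beatsearch.feature_extraction import MultiTrackMonoFeature, OnsetDensity

    rhythm = MidiRhythm("./drum_loop.mid")  # hypothetical file path
    extractor = MultiTrackMonoFeature(OnsetDensity, multi_track_mode="per_track")
    print(extractor.process(rhythm))  # one onset density per track, as a tuple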
class beatsearch.feature_extraction.PolyphonicMetricalTensionVector(unit=Unit.EIGHTH, salience_profile_type='equal_upbeats', normalize=False, cyclic=True, instrument_weights=None, include_combination_tracks=False, **kw)

Computes the weighted monophonic metrical tension vector for each track (or each track combination, if include_combination_tracks is set to True) and returns the sum. The monophonic metrical tension is computed with beatsearch.feature_extraction.MonophonicMetricalTensionVector. The weights of the tracks are specified with beatsearch.feature_extraction.PolyphonicMetricalTensionVector.set_instrument_weights(). The weights of the track combinations are the products of the individual instrument weights of the instruments within that instrument group (e.g., w(kick) = 0.2 and w(snare) = 0.5 result in a weight of 0.1 for the kick-snare combination).
normalize
When set to True, the tension will have a range of [0, 1].
Return type: bool
class beatsearch.feature_extraction.PolyphonicMetricalTensionMagnitude(unit=Unit.EIGHTH, salience_profile_type='equal_upbeats', normalize=True, cyclic=True, instrument_weights=None, include_combination_tracks=False, **kw)

Computes the magnitude of the polyphonic metrical tension vector (the Euclidean distance to the zero vector).
normalize
When set to True, the tension magnitude will range from 0 to 1.
Return type: bool
class beatsearch.feature_extraction.PolyphonicSyncopationVector(unit=Unit.EIGHTH, instr_weighting_function=<lambda>, salience_profile_type='equal_upbeats', interrupted_syncopations=True, nested_syncopations='keep_heaviest', **kw)

Finds the polyphonic syncopations and their syncopation strengths. This is an adaptation of the method proposed by M. Witek et al. in their work titled "Syncopation, Body-Movement and Pleasure in Groove Music". The method is implemented in terms of the monophonic syncopation feature extractor: the monophonic syncopations are found per instrument and upgraded to polyphonic syncopations by adding an instrumentation weight. The syncopations are then filtered based on the properties 'only_uninterrupted_syncopations' and 'nested_syncopations'.
instrumentation_weight_function
The instrumentation weight function is used to compute the instrumentation weights of the syncopations. This function receives two positional parameters:

syncopated_instrument: str
    The name of the instrument that plays the syncopated note.
closing_instruments: Set[str]
    The names of the instruments which syncopated_instrument is syncopated against (an empty set if the syncopation is against a rest).

The instrumentation weight function must return the weight as an integer. When this property is set to None, the instrumentation weight will equal zero for all syncopations.
Return type: Callable[[str, Set[str]], int]
nested_syncopations
This property determines how nested syncopations are handled. Two syncopations are said to be nested if one syncopation starts during the other. Note that if only_uninterrupted_syncopations is set to True, no nested syncopations will be detected, effectively ignoring this property.
Nested syncopations can be handled in four different ways:
keep_heaviest
Only the syncopation with the highest syncopation strength remains.
keep_first
Only the first (not nested) syncopation remains.
keep_last
Only the last (most nested) syncopation remains.
keep_all
All syncopations remain.
Suppose you have a rhythm with three instruments, A, B and C, and these three nested syncopations:

    Instrument A (strength=1): ....:...<|>...
    Instrument B (strength=2): <---:----|>...
    Instrument C (strength=5): ....:<---|>...

    Legend:
    <  = syncopated note
    -  = pending syncopation
    >  = closing note (end of syncopation)
    |  = crotchet pulse
    :  = quaver pulse

From these three syncopations:
- keep_heaviest: only syncopation C remains
- keep_first: only syncopation B remains
- keep_last: only syncopation A remains
- keep_all: syncopations A, B and C remain
Return type: str
only_uninterrupted_syncopations
Setting this property to True causes this feature extractor to find only uninterrupted syncopations. A syncopation is said to be interrupted if another instrument plays a note during the syncopation. Note that setting this property to True makes syncopations containing nested syncopations undetectable, effectively ignoring the nested_syncopations property.
Return type: bool
salience_profile_type
The type of salience profile to be used for syncopation detection. This must be one of: ['hierarchical', 'equal_upbeats', 'equal_beats']. See beatsearch.rhythm.TimeSignature.get_salience_profile() for more info.
Return type: str
class beatsearch.feature_extraction.PolyphonicSyncopationVectorWitek(unit=Unit.EIGHTH, salience_profile_type='equal_upbeats', instrumentation_weight_function=None, **kw)

Finds the polyphonic syncopations and their syncopation strengths. This is an implementation of the method proposed by Maria A.G. Witek et al. in their work titled "Syncopation, Body-Movement and Pleasure in Groove Music".
The definition of syncopation, as proposed in the work of Witek, goes as follows:

    If N is a note that precedes a rest, R, and R has a metric weight greater than or equal to N, then the pair (N, R) is said to constitute a monophonic syncopation. If N is a note on a certain instrument that precedes a note on a different instrument, Ndi, and Ndi has a metric weight greater than or equal to N, then the pair (N, Ndi) is said to constitute a polyphonic syncopation.

This definition is used to find the syncopations. The syncopation strengths are then computed with this formula:

    S = Ndi - N + I

where S is the degree of syncopation, Ndi is the metrical weight of the note succeeding the syncopated note, N is the metrical weight of the syncopated note, and I is the instrumentation weight. The instrumentation weight depends on the relation of the instruments involved in the syncopation. This relation depends on two factors:
- the number of instruments involved in the syncopation
- the type of instruments involved in the syncopation
As there is no formula given by Witek for how to compute this value, the computation of this value is up to the owner of this feature extractor. The function to compute the weight can be set through the set_instrumentation_weight_function method and should be a callable receiving two arguments:
- the names of the tracks (instruments) that play a syncopated note
- the names of the tracks (instruments) that the note is syncopated against (empty if syncopated against a rest)
The syncopations are returned as a sequence of three-element tuples containing:
- the degree of syncopation (syncopation strength)
- the position of the syncopated note(s)
- the position of the note(s)/rest(s) against which the note(s) are syncopated
NOTE: the formula in Witek's work is different: S = N - Ndi + I. This is suspected to be a typo, as the examples in the same work use the formula S = Ndi - N + I.
static default_instr_weight_function(syncopated_instruments, other_instruments)
The default instrumentation weight function.
Return type: int
Midi

MidiRhythm
class beatsearch.rhythm.MidiRhythm(midi_file='', midi_pattern=None, midi_mapping=<beatsearch.rhythm.MidiDrumMappingImpl object>, midi_mapping_reducer_cls=None, name='', preserve_midi_duration=False, **kwargs)

Bases: beatsearch.rhythm.RhythmLoop
as_midi_pattern(note_length=0, midi_channel=9, midi_format=0, midi_keys=None)
Converts this rhythm to a MIDI pattern.
Parameters:
- note_length (int) – note duration in ticks
- midi_channel (int) – NoteOn/NoteOff event channel (defaults to 9, the default channel for drum sounds)
- midi_format (int) – MIDI format
- midi_keys (Optional[Dict[str, int]]) – optional dictionary holding the MIDI keys per track name
Return type: Pattern
Returns: MIDI pattern
get_midi_drum_mapping()
Returns the current MIDI drum mapping.
Return type: MidiDrumMapping
Returns: MIDI drum mapping object
get_midi_drum_mapping_reducer()
Returns the current MIDI drum mapping reducer class, or None if no mapping reducer has been set.
Return type: Optional[Type[MidiDrumMappingReducer]]
Returns: MIDI mapping reducer, or None if no reducer has been set
load_midi_pattern(pattern, preserve_midi_duration=False)
Loads a MIDI pattern and sets this rhythm's tracks, time signature, BPM and duration. The given MIDI pattern must have a resolution property and can't have more than one track containing note events. The MIDI events map to rhythm properties as follows:

- midi.NoteOnEvent: adds an onset to this rhythm
- midi.TimeSignatureEvent: sets the time signature of this rhythm (required)
- midi.SetTempoEvent: sets the BPM of this rhythm
- midi.EndOfTrackEvent: sets the duration of this rhythm (only if preserve_midi_duration is True)

The EndOfTrackEvent is required if preserve_midi_duration is set to True. If preserve_midi_duration is False, the duration of this rhythm will be set to the first downbeat after the last note position.

Parameters:
- pattern (Pattern) – the MIDI pattern to load
- preserve_midi_duration (bool) – when True, the duration will be set to the position of the MIDI EndOfTrackEvent, otherwise it will be set to the first downbeat after the last note position
Return type: None
Returns: None
midi_drum_mapping_reducer
The MIDI drum mapping reducer class. Setting this property will reset the tracks of this rhythm. Set this property to None for no MIDI drum mapping reducer.
Return type: Optional[Type[MidiDrumMappingReducer]]
midi_mapping
The MIDI mapping. The MIDI mapping is used when parsing the MIDI data to create the track names. This is a read-only property.
midi_mapping_reducer
The mapping reducer class. The MIDI drum mapping reducer class is the class of the mapping reducer used to parse the MIDI data and create the tracks of this rhythm. This is a read-only property.
set_midi_drum_mapping(drum_mapping)
Sets the MIDI drum mapping and resets the tracks accordingly.
Parameters: drum_mapping (MidiDrumMapping) – MIDI drum mapping
Return type: None
Returns: None
set_midi_drum_mapping_reducer(mapping_reducer_cls)
Sets the MIDI drum mapping reducer and reloads the tracks. If no mapping reducer is given (None), the current mapping reducer is removed. The rhythm duration will remain unchanged.
Parameters: mapping_reducer_cls (Optional[Type[MidiDrumMappingReducer]]) – MIDI drum mapping reducer class, or None to remove the mapping reducer
Returns: None
write_midi_out(midi_file, **kwargs)
Writes this rhythm loop as a MIDI file.
Parameters:
- midi_file (Union[str, IOBase]) – MIDI file or file path
- kwargs – arguments passed to as_midi_pattern; see the documentation of that method
Returns: None
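A sketch of loading a rhythm from one MIDI file and writing a modified copy to another (file paths hypothetical):

    from beatsearch.rhythm import MidiRhythm

    rhythm = MidiRhythm("./drum_loop.mid", name="my loop")  # hypothetical input path
    rhythm.set_bpm(128)
    rhythm.write_midi_out("./drum_loop_128bpm.mid", midi_channel=9)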
MidiRhythmCorpus

class beatsearch.rhythm.MidiRhythmCorpus(path=None, **kwargs)
export_as_midi_files(directory, **kwargs)
Converts all rhythms in this corpus to MIDI patterns and saves them to the given directory.
Parameters:
- directory (str) – directory to save the MIDI files to
- kwargs – named arguments passed to beatsearch.rhythm.MidiRhythm.as_midi_pattern()
Returns: None
has_loaded()
Returns whether this rhythm corpus has already been loaded. This returns True after a successful call to load().
Returns: True if this corpus has been loaded; False otherwise
id
The UUID of this rhythm corpus. This is a read-only property.
is_up_to_date(midi_root_dir)
Returns whether the rhythms in this corpus are fully up to date with the MIDI contents of the given directory. Recursively scans the given directory for MIDI files and checks whether the files are identical (both file names and file modification timestamps) to the files that were used to create this corpus.
Parameters: midi_root_dir (str) – MIDI root directory that was used to create this corpus
Returns: True if up to date; False otherwise
load_from_cache_file(cache_fpath)
Loads this MIDI corpus from a serialized pickle file previously created with beatsearch.rhythm.MidiRhythmCorpus.save_to_cache_file().
Parameters: cache_fpath (Union[IOBase, str]) – path to the serialized pickle file
Returns: None
load_from_directory(midi_root_dir)
Loads this MIDI corpus from a MIDI root directory. Recursively scans the given directory for MIDI files and loads one rhythm per MIDI file.
Parameters: midi_root_dir (str) – MIDI root directory
Returns: None
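A sketch of a typical corpus workflow (directory and cache paths hypothetical):

    from beatsearch.rhythm import MidiRhythmCorpus

    corpus = MidiRhythmCorpus()
    corpus.load_from_directory("./midi_loops")  # hypothetical MIDI root directory
    corpus.save_to_cache_file("./corpus.pkl", overwrite=True)

    # Later: reload from the cache, falling back to a rescan when stale
    corpus = MidiRhythmCorpus()
    corpus.load_from_cache_file("./corpus.pkl")
    if not corpus.is_up_to_date("./midi_loops"):
        corpus.load_from_directory("./midi_loops")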
midi_mapping_reducer
The MIDI drum mapping reducer applied to the rhythms in this corpus. Note that setting this property is an expensive operation, as it iterates over every rhythm to reset its tracks according to the new mapping reducer.
Return type: Optional[Type[MidiDrumMappingReducer]]
rhythm_resolution
The tick resolution in PPQN (pulses-per-quarter-note) of the rhythms within this corpus. This property becomes read-only after the corpus has loaded.
Returns: resolution in PPQN of the rhythms in this corpus
save_to_cache_file(cache_file, overwrite=False)
Serializes this MIDI corpus to a pickle file.
Parameters:
- cache_file (Union[IOBase, str]) – either an opened file handle in binary-write mode or a file path
- overwrite – when True, no exception will be raised if the given file path already exists
Returns: None
unload()
Unloads this rhythm corpus. This method has no effect if the corpus has not been loaded.
Returns: None
MidiDrumMapping

class beatsearch.rhythm.MidiDrumMapping

MIDI drum mapping interface.
Each MidiDrumMapping object represents a MIDI drum mapping and is a container for MidiDrumKey objects. It provides functionality for retrieving these objects based on either MIDI pitch, frequency band or key id.
get_key_by_id(key_id)
Returns the MidiDrumKey with the given key id.
Parameters: key_id (str) – key id of the MIDI drum key
Return type: Optional[MidiDrumKey]
Returns: MidiDrumKey object with the given key id, or None if no key was found with the given key id
get_key_by_midi_pitch(midi_pitch)
Returns the MidiDrumKey with the given MIDI pitch.
Parameters: midi_pitch (int) – MIDI pitch as an integer
Return type: Optional[MidiDrumKey]
Returns: MidiDrumKey object with the given MIDI pitch, or None if no key was found with the given pitch
get_keys()
Returns an immutable sequence containing all keys.
Return type: Sequence[MidiDrumKey]
Returns: an immutable sequence containing all the keys of this mapping as MidiDrumKey objects
get_keys_with_decay_time(decay_time)
Returns the keys with the given decay time.
Parameters: decay_time (DecayTime) – DecayTime enum object (SHORT, NORMAL or LONG)
Return type: Tuple[MidiDrumKey, ...]
Returns: a tuple containing the MidiDrumKey objects with the given decay time, or an empty tuple if none were found
get_keys_with_frequency_band(frequency_band)
Returns the keys with the given frequency band.
Parameters: frequency_band (FrequencyBand) – FrequencyBand enum object (LOW, MID or HIGH)
Return type: Tuple[MidiDrumKey, ...]
Returns: a tuple containing the MidiDrumKey objects with the given frequency band, or an empty tuple if none were found
get_name()
Returns the name of this drum mapping.
Returns: name of this drum mapping as a string
create_drum_mapping

beatsearch.rhythm.create_drum_mapping(name, keys)
Utility function to create a new MIDI drum mapping.
Parameters:
- name (str) – name of the drum mapping
- keys (Sequence[MidiDrumKey]) – drum keys as a sequence of beatsearch.rhythm.MidiDrumKey objects
Returns: MIDI drum mapping
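A sketch of building a small custom mapping (the import locations of FrequencyBand and DecayTime are assumptions; MidiDrumKey is documented below):

    from beatsearch.rhythm import (  # FrequencyBand/DecayTime import location assumed
        MidiDrumKey, create_drum_mapping, FrequencyBand, DecayTime)

    my_mapping = create_drum_mapping("MyKit", [
        MidiDrumKey(36, FrequencyBand.LOW, DecayTime.NORMAL, "Kick", key_id="kck"),
        MidiDrumKey(38, FrequencyBand.MID, DecayTime.NORMAL, "Snare", key_id="snr"),
        MidiDrumKey(42, FrequencyBand.HIGH, DecayTime.SHORT, "Closed hi-hat", key_id="chh"),
    ])
    print(my_mapping.get_key_by_midi_pitch(36).description)  # -> "Kick"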
MidiDrumKey

class beatsearch.rhythm.MidiDrumKey(midi_pitch, frequency_band, decay_time, description, key_id=None)

Struct-like class holding information about a single key within a MIDI drum mapping.
Holds information about the frequency band and the decay time of the drum sound it represents. Also stores the MIDI pitch ([0, 127]) used to produce this sound and an id, which defaults to the MIDI pitch.
decay_time
The decay time (DecayTime enum object) of this drum key (read-only).
Return type: DecayTime
description
The description of this drum key as a string (read-only).
Return type: str
frequency_band
The frequency band (FrequencyBand enum object) of this drum key (read-only).
Return type: FrequencyBand
id
The id of this drum key as a string (read-only).
Return type: str
midi_pitch
The MIDI pitch of this MIDI drum key (read-only).
Return type: int