Hi there 👋, I'm Zulfadhli Mohamad, an ML Engineer working at Rotor Videos. I'm originally from Kuala Lumpur, Malaysia 🇲🇾 and based in Belfast, United Kingdom 🇬🇧.
I mostly work on implementing Music Information Retrieval techniques to automate the creation of Music Videos. I sometimes build cool video effects that respond to music.
I received my PhD in Electronic Engineering from Queen Mary University of London in 2016. My thesis, "Estimating Performance Parameters from Electric Guitar Recordings", is mostly about analysing electric guitar sounds 🎸.
Designed, maintained and deployed a Lyric-to-Audio alignment system for designers to use when creating lyric videos. Below is a demo I rendered that shows the lyrics automatically aligned to the audio without any manual adjustments.
More examples of those lyric videos can be found on YouTube: SongJam and Textsicher.
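As a technical aside, the core idea of forced alignment can be sketched with the open-source aeneas library. This is only an illustration under assumed settings, not the system deployed at Rotor, and the file paths are placeholders (ideally the audio would be an isolated vocal stem):

```python
from aeneas.executetask import ExecuteTask
from aeneas.task import Task

# Placeholder paths: a vocal stem and a plain-text lyrics file (one line per lyric).
config_string = "task_language=eng|is_text_type=plain|os_task_file_format=json"
task = Task(config_string=config_string)
task.audio_file_path_absolute = "/tmp/vocals.wav"
task.text_file_path_absolute = "/tmp/lyrics.txt"
task.sync_map_file_path_absolute = "/tmp/syncmap.json"

# Run the alignment and write a JSON sync map with start/end times per lyric line.
ExecuteTask(task).execute()
task.output_sync_map_file()
```

The resulting sync map is exactly the kind of timing data a lyric-video renderer needs to place each line on screen.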
Using Media Intelligence to create interesting visuals for our Music Videos, Artwork Videos and Lyric Videos. Deployed a state-of-the-art Source Separation model and a Drum Component Separation model. Created video effects and visualisers that respond to vocals, bass, kick drums and snare drums. Implemented and deployed a Music Segmentation model.
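As a lightweight illustration of the idea (not the neural models actually deployed), librosa's classic harmonic-percussive separation already splits a mix into a tonal layer and a percussive layer whose onset strength can drive visuals; the audio path is a placeholder:

```python
import librosa

# "song.wav" is a placeholder path.
y, sr = librosa.load("song.wav")

# Harmonic-percussive source separation: a simple DSP baseline standing in
# for the Source Separation / Drum Component Separation models mentioned above.
y_harmonic, y_percussive = librosa.effects.hpss(y)

# Onset strength of the percussive layer, useful for beat-synced video effects.
onset_env = librosa.onset.onset_strength(y=y_percussive, sr=sr)
print(onset_env.shape)
```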
Below is a demo that I personally made (the style, effects, dominant colour extraction and music analysis), and it is available for all users to try out with their own album artwork and songs! The background colours are the dominant colours extracted from the artwork, with a slight tint applied. The speed of the movement is tied to the tempo of the song, while the background colours change from one to the next according to the current segment of the song.
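A rough sketch of the two analysis steps behind this demo: a k-means palette for the dominant colours and a tempo estimate from librosa. The production pipeline differs in the details, and the file paths are placeholders:

```python
import numpy as np
import librosa
from PIL import Image
from sklearn.cluster import KMeans

# "artwork.jpg" and "song.wav" are placeholder paths.
pixels = np.asarray(Image.open("artwork.jpg").convert("RGB")).reshape(-1, 3)

# k-means over the pixels yields a small palette of dominant colours.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
palette = kmeans.cluster_centers_.astype(int)  # five RGB colours
print("dominant colours:", palette)

# Tempo estimate used to set the speed of the animation.
y, sr = librosa.load("song.wav")
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
print("tempo (BPM):", tempo)
```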
The next example is a Generative Music Video. The shapes respond to the kick, snare and hi-hat, as well as the vocals of the song.
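A simple stand-in for that drum-aware analysis (the real system uses a Drum Component Separation model): band-wise onset envelopes from librosa can roughly approximate kick, snare and hi-hat activity. The band boundaries are arbitrary and the file path is a placeholder:

```python
import librosa

# "song.wav" is a placeholder path.
y, sr = librosa.load("song.wav")

# Onset strength in three crude mel-band ranges (low / mid / high),
# used as rough proxies for kick, snare and hi-hat energy.
onset_bands = librosa.onset.onset_strength_multi(y=y, sr=sr, channels=[0, 16, 64, 128])
kick_env, snare_env, hat_env = onset_bands
print(kick_env.shape, snare_env.shape, hat_env.shape)
```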
The third example is an artwork video that uses the Depth Map of the image, predicted by a large convolutional network. Here is a blog post that I wrote about it: link
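For illustration, a depth map like this can be predicted with the publicly available MiDaS model via torch.hub. The exact network used in the video is not stated here, so treat the model choice (and the hub entry points, which follow the MiDaS README) as assumptions of this sketch; the image path is a placeholder:

```python
import cv2
import torch

# Load a small MiDaS depth model and its matching preprocessing transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# "artwork.jpg" is a placeholder path.
img = cv2.cvtColor(cv2.imread("artwork.jpg"), cv2.COLOR_BGR2RGB)
input_batch = transform(img)

with torch.no_grad():
    prediction = midas(input_batch)
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

print(depth.shape)  # per-pixel relative depth, ready to drive a parallax effect
```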
I developed techniques to estimate the pickup and plucking positions of an electric guitar. Interestingly, the pickup selection and the guitar model can be identified just from the pickup position estimates; for example, the estimates reveal that the electric guitar played in the recording is a Gibson SG and that the bridge pickup configuration is selected. As a result, I published 3 international conference papers and 1 journal paper, and I presented this work at 4 international conferences.
Link to GitHub. This is the Python implementation of my thesis. It extracts the pickup and plucking positions of an electric guitar.
The main motivation of this thesis is to explore several techniques for estimating electric guitar synthesis parameters in order to replicate the sound of popular guitarists. Many famous guitar players are recognisable by their distinctive electric guitar tone, and guitar enthusiasts would like to play or obtain their favourite guitarist's sound on their own guitars. The thesis starts by exploring the possibility of replicating a target guitar sound, given an input guitar signal, using a digital filter. As a preliminary step, a technique is proposed to transform the sound of one pickup into another on the same electric guitar. A least squares estimator is used to obtain the coefficients of a finite impulse response (FIR) filter that performs the transformation. The technique yields good results, supported by a listening test and a spectral distance measure showing that up to 99% of the difference between input and target signals is reduced. The robustness of the filters to changes in repetition, plucking position, dynamics and fret position is also discussed: a small increase in error was observed for different repetitions, moderate errors arose when the plucking position and dynamics were varied, and errors were large when the training and test data comprised different notes (fret positions).

Secondly, the thesis explores another way to replicate the sound of popular guitarists that overcomes the limitations of the first approach. Instead of directly morphing one sound into another, the sound is replicated with electric guitar synthesis, which is more flexible but requires the synthesis parameters to be estimated. Three approaches to estimating the pickup and plucking positions of an electric guitar are discussed: the Spectral Peaks (SP), Autocorrelation of Spectral Peaks (AC-SP) and Log-correlation of Spectral Peaks (LC-SP) methods. LC-SP produces the best results with faster computation: the median absolute errors of the pickup and plucking position estimates are 1.97 mm and 2.73 mm respectively on single pickup data, and the errors increase slightly on mixed pickup data. LC-SP is also shown to be robust to changes in plucking dynamics and fret positions, where the median absolute errors for pickup and plucking position estimates are less than 4 mm.

Finally, the Polynomial Regression Spectral Flattening (PRSF) method is introduced to compensate for the effects of guitar effects units, amplifiers, loudspeakers and microphones. The accuracy of the estimates is then tested on several guitar signal chains, where the median absolute errors for pickup and plucking position estimates range from 2.04 mm to 7.83 mm and from 2.98 mm to 27.81 mm respectively.
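As background for the parameter estimation above, here is a simplified, textbook-style model of how the plucking point and pickup position shape the harmonic spectrum of an ideal string. It is only a sketch of the underlying physics, not the exact model derived in the thesis, and the lengths are illustrative:

```python
import numpy as np

def ideal_string_spectrum(plucking_pos, pickup_pos, scale_length=0.628, n_harmonics=30):
    """Harmonic magnitudes of an idealised plucked string observed at a pickup.

    Simplified model: the n-th harmonic is attenuated by sin(n*pi*p/L) from the
    plucking point and by sin(n*pi*d/L) from the pickup position (both measured
    from the bridge), with an overall 1/n^2 decay. Positions near a node of a
    harmonic suppress that harmonic, producing the comb-filter-like notches
    that the estimation methods look for.
    """
    n = np.arange(1, n_harmonics + 1)
    rp = plucking_pos / scale_length
    rd = pickup_pos / scale_length
    return np.abs(np.sin(n * np.pi * rp) * np.sin(n * np.pi * rd)) / n**2

# Illustrative numbers: ~628 mm scale, plucked 120 mm from the bridge,
# bridge pickup roughly 40 mm from the bridge.
mags = ideal_string_spectrum(0.120, 0.040)
print(np.round(mags / mags.max(), 3))
```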
This paper proposes a technique that estimates the locations along the string of the plucking event and the magnetic pickup of an electric guitar based on the autocorrelation of the spectral peaks. To improve accuracy, a method is introduced to flatten the spectrum before applying the autocorrelation function to the spectral peaks. The minimum mean squared error between the autocorrelation of the observed data and the electric guitar model is found in order to estimate the model parameters. The accuracy of the algorithm is tested on various plucking positions on all open strings for each pickup configuration. The accuracy of the proposed method for various plucking dynamics and fret positions is also evaluated. The method yields accurate results: the average absolute errors of the pickup position and plucking point estimates for single pickups are 3.53 and 5.11 mm, respectively, and for mixed pickups are 8.47 and 9.95 mm, respectively. The model can reliably distinguish which pickup configuration is selected using the pickup position estimates. Moreover, the method is robust to changes in plucking dynamics and fret positions.
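A rough sketch of the general idea (not the published implementation): sample the harmonic peak magnitudes of an isolated note and look for minima in their autocorrelation, whose lags relate to the comb-filter notches introduced by the plucking point and the pickup. The file path is a placeholder and the peak picking is deliberately crude:

```python
import numpy as np
import librosa

# "guitar_note.wav" is a placeholder path to an isolated electric guitar note.
y, sr = librosa.load("guitar_note.wav", sr=None)

# Rough fundamental frequency estimate for the note.
f0 = float(np.median(librosa.yin(y, fmin=70, fmax=700, sr=sr)))

# Magnitude spectrum sampled at multiples of f0 (a crude stand-in for proper peak picking).
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1.0 / sr)
n_harmonics = int(min(40, (sr / 2) // f0))
peaks = np.array(
    [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(1, n_harmonics + 1)]
)

# Autocorrelation of the harmonic magnitudes: the notch structure shows up as
# minima whose lags map back to positions along the string.
peaks = peaks - peaks.mean()
acf = np.correlate(peaks, peaks, mode="full")[n_harmonics - 1:]
print(np.argsort(acf)[:5])  # candidate lags for the two notches
```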
In this paper, we introduce an approach to estimate the pickup position and plucking point on an electric guitar for both single notes and chords recorded through an effects chain. We evaluate the accuracy of the method on direct input signals along with 7 different combinations of guitar amplifier, effects, loudspeaker cabinet and microphone. The autocorrelation of the spectral peaks of the electric guitar signal is calculated and the two minima that correspond to the locations of the pickup and plucking event are detected. In order to model the frequency response of the effects chain, we flatten the spectrum using polynomial regression. The errors decrease after applying the spectral flattening method. The median absolute error for each preset ranges from 2.10 mm to 7.26 mm for pickup position and 2.91 mm to 21.72 mm for plucking position estimates. For strummed chords, faster strums are more difficult to estimate but still yield accurate results, where the median absolute errors for pickup position estimates are less than 10 mm.
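The spectral flattening step can be sketched as a low-order polynomial fit to the log-magnitudes of the harmonic peaks, subtracted so that the broad colouration of the amp/effects/speaker/microphone chain is removed before the notches are analysed. This is an illustrative approximation, not the exact PRSF method from the paper:

```python
import numpy as np

def flatten_spectrum(peak_mags, order=3):
    """Polynomial-regression spectral flattening (illustrative sketch).

    Fits a low-order polynomial to the log-magnitudes of the harmonic peaks,
    approximating the colouration of the effects chain, and subtracts it so
    the comb-filter structure of the string is easier to see.
    """
    n = np.arange(1, len(peak_mags) + 1, dtype=float)
    log_mags = 20 * np.log10(peak_mags + 1e-12)
    trend = np.polyval(np.polyfit(n, log_mags, order), n)
    return log_mags - trend

# Hypothetical harmonic magnitudes from a processed guitar recording.
rng = np.random.default_rng(0)
mags = np.abs(rng.normal(1.0, 0.3, 40)) * np.exp(-np.arange(40) / 15.0)
print(np.round(flatten_spectrum(mags), 2))
```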
This paper describes a technique to estimate the plucking point and magnetic pickup location along the strings of an electric guitar from a recording of an isolated guitar tone. The estimated values are calculated by minimising the difference between the magnitude spectrum of the recorded tone and that of an electric guitar model based on an ideal string. The recorded tones that are used for the experiment consist of a direct input electric guitar played on all six open strings and played moderately loud (mezzo-forte). The technique is able to estimate pickup locations with 7.75–9.44 mm average absolute error and plucking points with 10.45–10.97 mm average absolute error for single and mixed pickups.
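The spectrum-matching idea can be illustrated with a brute-force search that minimises the mean squared error between observed harmonic magnitudes and those of an idealised string/pickup model. Both the model and the search are simplified sketches, not the method as published:

```python
import numpy as np

def model_spectrum(rp, rd, n_harmonics=30):
    # Idealised plucked-string/pickup model (same simplification as earlier sketches).
    n = np.arange(1, n_harmonics + 1)
    return np.abs(np.sin(n * np.pi * rp) * np.sin(n * np.pi * rd)) / n**2

def estimate_positions(observed, scale_length=0.648, step=0.002):
    """Grid search for the plucking/pickup fractions minimising the MSE
    between observed and modelled harmonic magnitudes. Note the model is
    symmetric in rp and rd, so the two estimates can come out swapped."""
    observed = observed / np.max(observed)
    best, best_err = (None, None), np.inf
    candidates = np.arange(0.02, 0.5, step)  # fractions of the string length
    for rp in candidates:
        for rd in candidates:
            m = model_spectrum(rp, rd, len(observed))
            err = np.mean((observed - m / np.max(m)) ** 2)
            if err < best_err:
                best, best_err = (rp, rd), err
    return best[0] * scale_length, best[1] * scale_length  # metres from the bridge

# Synthetic check: generate a model spectrum, then recover the two positions.
true_spectrum = model_spectrum(0.2, 0.06)
print(estimate_positions(true_spectrum))
```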
This paper describes a technique to transform the sound of an arbitrarily selected magnetic pickup into another pickup selection on the same electric guitar. This is a first step towards replicating an arbitrary electric guitar timbre in an audio recording using the signal from another guitar as input. We record 1458 individual notes from the pickups of a single guitar, varying the string, fret, plucking position, and dynamics of the tones in order to create a controlled dataset for training and testing our approach. Given an input signal and a target signal, a least squares estimator is used to obtain the coefficients of a finite impulse response (FIR) filter to match the desired magnetic pickup position. We use spectral difference to measure the error of the emulation, and test the effects of the independent variables (fret, dynamics, plucking position and repetition) on the accuracy. A small reduction in accuracy was observed for different repetitions; moderate errors arose when the playing style (plucking position and dynamics) was varied; and there were large differences between output and target when the training and test data comprised different notes (fret positions). We explain the results in terms of the acoustics of the vibrating strings.
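The least-squares FIR estimation at the heart of this paper can be sketched in a few lines: build the convolution matrix of the input signal and solve for the filter taps that best map it onto the target. The sketch below uses synthetic signals and ignores the framing and dataset details of the paper:

```python
import numpy as np
from scipy.linalg import toeplitz

def fit_fir_filter(x, y, n_taps=64):
    """Least-squares FIR filter mapping input signal x to target signal y.

    Builds the (len(x) x n_taps) convolution matrix of x and solves X h ~ y
    for the filter taps h.
    """
    first_row = np.zeros(n_taps)
    first_row[0] = x[0]
    X = toeplitz(x, first_row)
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

# Synthetic check: recover a known 8-tap filter from noisy observations.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
true_h = rng.normal(size=8)
y = np.convolve(x, true_h)[: len(x)] + 0.01 * rng.normal(size=len(x))
h = fit_fir_filter(x, y, n_taps=8)
print(np.round(h - true_h, 3))  # should be close to zero
```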