EXPLORING THE USE OF AI TO CREATE MUSIC INSPIRED BY NATURE SOUNDS
Authors: Haritha J, Hareharan P K, Kanishka J, Deepak S R

Abstract

This paper presents a deep learning approach to generating music inspired by natural sounds such as rivers, rain, and forests, using a combination of Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Unlike previous works that relied on Generative Adversarial Networks (GANs), the proposed model uses RNNs to handle sequential audio data and CNNs to extract spatial features from its time-frequency representations. Audio is preprocessed with librosa to extract spectrograms and Mel-frequency cepstral coefficients (MFCCs). The model is trained on a dataset of natural sound recordings, and the generated outputs are assessed on their harmonic resemblance to nature's acoustics. Further, dominant-frequency-based sound merging is used to enhance the auditory realism of the generated compositions. The paper also introduces a new method for audio visualization, aimed at better understanding the generated results. The results demonstrate that AI can produce music that captures the essence of natural sounds, opening new possibilities for sound design in media, ambient music, and environmental simulations.
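The two preprocessing ideas named in the abstract, time-frequency feature extraction and dominant-frequency-based merging, can be sketched as below. This is a minimal NumPy-only illustration under assumed parameters (frame length, hop size, blend weights); the paper itself uses librosa, whose `librosa.feature.melspectrogram` and `librosa.feature.mfcc` would replace the hand-rolled short-time FFT here, and the paper's actual merging rule is not specified, so the weighting scheme shown is purely hypothetical.

```python
import numpy as np

def stft_magnitude(signal, frame_len=1024, hop=256):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Shape: (n_frames, frame_len // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

def dominant_frequency(signal, sr, frame_len=1024, hop=256):
    """Frequency (Hz) of the strongest spectral bin, averaged over frames."""
    mag = stft_magnitude(signal, frame_len, hop)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    return float(freqs[np.argmax(mag.mean(axis=0))])

def merge_by_dominant_frequency(a, b, sr):
    """Blend two sounds, weighting the lower-dominant-frequency one more
    (an assumed rule: low-frequency ambience as the bed, the other as accent)."""
    fa, fb = dominant_frequency(a, sr), dominant_frequency(b, sr)
    w = 0.7 if fa < fb else 0.3   # assumed blend weights, not from the paper
    n = min(len(a), len(b))
    return w * a[:n] + (1.0 - w) * b[:n]

# Demo on synthetic stand-ins: a 110 Hz "river" rumble and a 880 Hz "bird" tone.
sr = 22050
t = np.arange(sr) / sr
river = np.sin(2 * np.pi * 110 * t)
birds = 0.5 * np.sin(2 * np.pi * 880 * t)
mix = merge_by_dominant_frequency(river, birds, sr)
```

With librosa installed, the spectrogram step would typically be `librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))` and the MFCCs `librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)`, which yield the inputs the CNN branch consumes.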
Keywords: AI-generated music, nature sounds, RNN, CNN, deep learning, audio processing, sound synthesis, librosa, sound frequency merging.

Published On: 2024-12-11