This project explores how the power of social media and social interactions, whether online or in the real world, can be harnessed to create digital music.
The core methodology of the project is to develop a mobile app, “echo-snap”, which will be used to study these interactions. Echo-snap will enable users to create a musical self-portrait, or ‘musical selfie’: a musical representation of the user that reflects their social experiences. To modify this composition, users must engage in different kinds of social interactions. These interactions might take place via social media such as Facebook or Instagram: chatting with another user might produce one kind of musical outcome, for example, while tagging friends in a photo might produce another. The composition can evolve only through these online interactions or through real-world social activities undertaken with friends. The musical selfie logs these interactions as compositional transformations: the composition might grow longer or shorter in duration; it might gain one or more new voices; it might change its timbral qualities, tempo, rhythms, harmonies, and so on. The sounds and music will thus be generated almost entirely through social activity. Users will be able to share their musical selfies with one another and to collaborate with other users in developing their compositions.
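The mapping from interactions to transformations described above can be sketched in code. The following is a minimal, purely illustrative sketch, assuming a hypothetical event model in which each interaction type (chatting, tagging a photo, a real-world meetup) triggers one compositional change; none of these names reflect an actual echo-snap API.

```python
from dataclasses import dataclass, field

@dataclass
class MusicalSelfie:
    """Toy model of a composition's mutable parameters (illustrative only)."""
    duration_bars: int = 8
    voices: list = field(default_factory=lambda: ["lead"])
    tempo_bpm: int = 100

def apply_interaction(selfie: MusicalSelfie, event: str) -> MusicalSelfie:
    """Each kind of social interaction yields a different transformation
    (hypothetical mapping, chosen here for illustration)."""
    if event == "chat":          # chatting lengthens the piece
        selfie.duration_bars += 2
    elif event == "photo_tag":   # tagging a friend adds a new voice
        selfie.voices.append(f"voice_{len(selfie.voices)}")
    elif event == "meetup":      # a real-world activity raises the tempo
        selfie.tempo_bpm = min(selfie.tempo_bpm + 8, 160)
    return selfie

# A short stream of logged interactions evolves the composition.
s = MusicalSelfie()
for ev in ["chat", "photo_tag", "meetup"]:
    apply_interaction(s, ev)
print(s.duration_bars, len(s.voices), s.tempo_bpm)  # → 10 2 108
```

The key design point the sketch illustrates is that the composition is never edited directly: it evolves only as a side effect of a stream of social events, which is the project's central constraint.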