PIKSEL Festival 2022
<-- Tango for Us Two/Too --> is a live coding performance that merges web programming with the choreographic language of Tango. The script focuses on the dialogical nature of Tango, feeding Google Translate with fragments of texts from interviews with Tango dancers and practitioners. It invites us to a pas de deux performed by the online interface and JavaScript functions that randomise search queries and present a series of (mis)translations: an algorithmic dance sustaining glitches between the techniques and poetics of Tango, each breath a step towards the emergence of a new vocabulary for the moving.
Digital Tools for Inclusive Art Experiences (IDLE) is an innovative artistic and participatory project based on a digitally updated art venue space, Studio 207, in Bergen.
The venue's audiovisual devices are controlled remotely through a virtual gallery. Artists and audiences can manipulate lights, videos, and sounds to create different atmospheres through Internet of Things technologies. The public designs spatial audiovisual experiences for those who are In Real Life at the venue and simultaneously in the virtual gallery!
IDLE intends to offer a creative virtual meeting point for school kids, youngsters, people with reduced mobility who want to interact with the physical world, and all art-curious lovers looking for new physical-virtual experiences. The project explores new collaborations and forms of interaction between different art and cultural forms.
IDLE is an innovative project initiated by Piksel, in collaboration with CNDSD, Malitzin Cortés and Iván Abreu, APO33, Jenny Pickett, Julien Ottavi, Romain Papion, and Martin Koch. It is a three-year project supported by the Municipality of Bergen and Arts Council Norway.
PIKSELXX AI AI AI presents this experience to the world for the first time. For the premiere in Bergen we have invited the artists and developers of the project, CNDSD, Malitzin Cortés, Iván Abreu, APO33, Jenny Pickett, Julien Ottavi, and Romain Papion, to create the first sound and visual, physical and virtual experience. Join us at Studio 207 and the @Piksel Cyber Salon on Thursday, 21-22h.
Building Open Wave-Receivers enables DIY communications reception, and allows anyone to freely listen to the broad spectrum of radio waves around us. All you need are a few easy-to-procure supplies and, if you want to try it, a neighborhood fence or other receptive antenna proxy.
Why a fence? Antennas are necessary for radios to receive signals, and many things can be antennas. Fences can make great, and very long, antennas! Other materials can work well too; even a tent peg can become a useful part of a radio. Open Wave-Receivers allow us to explore the relationship between different combinations of materials, antennas, and radio waves, creating a new technology literacy, a new medium for artistic expression, and a new way to explore the airwaves in our communities.
We have found making Open Wave-Receivers to be a fun adventure. The ability to use simple scraps to create variety and personalization in each radio makes this a great maker project for anyone wanting to play with radio.
mimoidalnaube is a videogame piece for the T-Stick (http://www-new.idmil.org/project/the-t-stick/), a Digital Musical Instrument (DMI). It uses the Sopranino version of the T-Stick, the smallest in this instrument family, which houses the following sensors: gyroscope, accelerometer, magnetometer, piezo, a pressure sensor, and 12 touch sensors. It is an evolution of a DMI that has been in constant development for over a decade. This composition is the fruit of my participation in the second composers' workshop for the T-Stick, led and supervised by the inventors of the instrument, Joseph Malloch (https://josephmalloch.wordpress.com/) and D. Andrew Stewart (http://dandrewstewart.ca/). It was an exciting opportunity to incorporate a DMI into my current practice of comprovisation. In recent times, I have been using a video game approach as a vehicle for music comprovisation and performance. Today's game engines fit my interest in physical modeling as a mediator in human-computer interaction, visual scores, and visualization in the context of live musical performance. I use different techniques of game mechanics and interaction to shape the musical material. The visual composition serves as a form of score, inviting and guiding physical gesture, and, at the same time, conveys information about the state of the composition. The public witnesses the audio-visual feedback between the performer and the work.
teaser video: https://vimeo.com/761318188
John Bowers solo - tba
PLEASURE FORCE
is a duo of Hamburg-based bass guitarist and feminist performer Kris Kuldkepp and Berlin-based sound artist and voice improviser Dr. Nexus.
Their performances explore the intersections of noise and experimental music, silence and loudness, visuality and materiality, with a hint of pleasure.
TBA
DIWO interconnectivity and irrationality
The continuous implementation of AI and ML systems in all areas of technological artifacts, including art, is challenging the ways in which we understand the world around us and urges us to consider other-than-human entities and ‘objects’ as equally important as human beings. In an exploration of such philosophical ideas, which stem from the realms of Posthumanism, Actor-Network Theory, and Object-Oriented Ontology, ‘Ventriloquist Ontology’ encompasses the creation of a modular wearable, trained using Natural Language Processing to create its own personality, which manifests in the form of speech and movement actuation. It explores the limits of control and points of hybridization between the human and the machine through the relationship of a performer and a wearable entity. This ventriloquist modular soft entity speaks through text generated using a GPT-2 language model, trained on a dataset of texts around biopolitics, algo-governance, the surveilled body, and queer theory. Inspired by Alejandro Jodorowsky's dystopian theatrical play The School of Ventriloquists, this project manifests the idea of ‘soft control’ through the creation of a wearable that takes over the wearer's body and converts them into a puppet whose movement is dictated by what they wear.
The softness aspect of this control refers to the plasticity of the interface, the malleability of its hardware connections (mainly soft silicone wires), the suggestive nature of the GPT-2-generated text, and the indicative nature of the movement of the actuators (some of them, rather than radically moving the body, offer a suggestion as to how the body can follow their rhythm of actuation). In turn, it brings forth the soft data of the body, inextricably linked to ideas of care and intimacy, as well as to the pliability of the different levels of interpretation between the human and the machine. The aspect of control is tied to the cybernetic idea of steering the body to its optimal movement through a feedback loop between machinic language and human assimilation. It also refers to the hardness of the linear actuators and the microcontrollers that manipulate them, and to the domination of these mechanical components over the softness and vulnerability of the human flesh. It asserts the supervision of the artist over the system, in the curation of the content of both the generated text and its performative aspect. Ultimately, the idea of ventriloquism is used to give agency to an ontological entity comprised of subtle, suggestive wearable modules, human flesh, and cognitive motor abilities: born-digital, able to produce novel language, but also conditioned to reproduce the biases of its algorithmic parts.
MMM [Fluorescent Markov Beat] is the first branch of the MMM series. In an installation/concert format, MMM_FMB takes a minimalist and reductionist approach to the rhythmic question and the synesthesia between light and sound.
It consists of a square array of LED light tubes that turn on and off following a sequence of states generated by a Markov chain model. This stochastic and “bastard” model is created from the analysis of heterogeneous and diverse folk-music rhythm sources. The sound follows the same sequences and is generated by transduction and amplification of the light, accompanied by digital synthesis.
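The on/off state sequence described above can be sketched as a simple two-state Markov chain. The following is a minimal Python illustration, not the piece's actual code: the transition probabilities are invented for the example, whereas in MMM_FMB they come from the analysis of folk-music rhythms.

```python
import random

# Toy two-state Markov chain ("on"/"off") generating a light-tube sequence.
# The transition weights are illustrative, not derived from the piece.
TRANSITIONS = {
    "on":  {"on": 0.3, "off": 0.7},
    "off": {"on": 0.6, "off": 0.4},
}

def markov_sequence(start, steps):
    """Walk the chain for `steps` states, starting from `start`."""
    state = start
    seq = [state]
    for _ in range(steps - 1):
        r = random.random()
        cumulative = 0.0
        for nxt, p in TRANSITIONS[state].items():
            cumulative += p
            if r < cumulative:
                state = nxt
                break
        seq.append(state)
    return seq

print(markov_sequence("off", 8))
```

Each state in the resulting sequence would drive one frame of the LED array, with the sound following by transduction of the light.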
more info >>
https://noconventions.mobi/noish/hotglue/?MMM_FMB_eng
meta music machines, general
https://noconventions.mobi/noish/hotglue/?MMM_description_en/
This workshop aims to demystify some basic concepts of neural networks and their potential in artistic practices. Focusing on Pure Data and the brand-new neuralnet object, participants will be introduced to basic use cases of neural networks in audio (and possibly visuals). The workshop will end with a collective brainstorming session where participants either try things for themselves or share their ideas on how they would like to use a neural network in their own work.
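To give a flavour of the kind of basic use case the workshop addresses, here is a minimal sketch of a tiny neural network (one hidden layer, plain gradient descent) learning to map a control input to a synthesis parameter. It is written in Python purely for illustration; in the workshop this role is played by Pure Data's neuralnet object, and the x-squared mapping is an invented example.

```python
import math
import random

random.seed(1)

# Invented training pairs: gesture value x in [0, 1] -> parameter y = x**2.
DATA = [(x / 10, (x / 10) ** 2) for x in range(11)]

H = 8     # hidden units
LR = 0.1  # learning rate

w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """One hidden tanh layer feeding a linear output."""
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

# Train with stochastic gradient descent on squared error.
for _ in range(3000):
    for x, t in DATA:
        h, y = forward(x)
        err = y - t  # d(loss)/dy
        for i in range(H):
            grad_h = err * w2[i] * (1 - h[i] ** 2)  # backprop through tanh
            w2[i] -= LR * err * h[i]
            w1[i] -= LR * grad_h * x
            b1[i] -= LR * grad_h
        b2 -= LR * err

_, y_mid = forward(0.5)
print(f"net(0.5) = {y_mid:.3f}  (target 0.25)")
```

Once trained, such a mapping can be queried in real time, e.g. turning a sensor reading into a filter cutoff or grain density.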
MTCD is a monologue in which artist and researcher Teresa Dillon takes one "machine" from each year of her life. From radios to home recording devices to her first experiences on the Internet, reflections on tech's uses and misuses, failures and breakdowns highlight the glitchy realities and contextual relations in which the key "machines" that shaped her technological know-how and imagination play out.
MTCD originally premiered at Berlin's transmediale in 2018, with further presentations in 2019. This updated but stripped-back version is a special edition for PIKSEL's 20th birthday.
screenBashing is a live coding piece whose audio and visual materials are programmed in real time during the performance, using SuperCollider for its sound components and C for its visual elements.
Visuals are created by printing characters such as backslashes and underscores in rapid succession, while at the same time freezing the whole system several times per second, creating the illusion of animated motion.
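A minimal sketch of this character-printing technique might look as follows. This is not the artist's C code: it is an illustrative Python version, with an invented glyph set, showing how rapid printing plus a deliberate per-frame pause reads as animation.

```python
import random
import sys
import time

# Invented glyph set standing in for the backslashes and underscores
# printed during the performance.
GLYPHS = "\\_/|"

def frame(width=60):
    """One 'frame' of visuals: a line of randomly chosen glyphs."""
    return "".join(random.choice(GLYPHS) for _ in range(width))

def bash(frames=5, delay=0.05):
    """Print frames in rapid succession; the sleep is the deliberate
    'freeze' that makes successive lines read as animated motion."""
    for _ in range(frames):
        sys.stdout.write(frame() + "\n")
        sys.stdout.flush()
        time.sleep(delay)

bash(frames=3, delay=0.02)
```

In the piece itself the freeze is pushed much further, saturating the machine until it becomes unresponsive, as described below.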
Audio is generated via a "one-liner", with no refined performance controls, making it impossible to later tweak the parameters of anything already generated and playing. Any modification of the code creates a new version of the audio, which can only sound in superposition with the previous layers, creating an accumulation that drives the narrative forward. Layers cannot be paused or removed after their creation. Mistakes cannot be undone, and all decisions are final.
One consequence of this setup is that it is extremely resource-heavy on the computer, as it purposely freezes the whole system several times per second in order to create an animation.
This unavoidable consequence - saturation of the machine processing power - is embraced as a principle/composition guideline, and is deeply explored throughout the performance, with the computer becoming gradually more unresponsive as new animations are spawned. After a certain threshold, the system becomes erratic, up to a point where it is no longer possible to gain control of it.
YupanaSimi
Yupana Simi is an interactive audiovisual work executed through code in a programming language whose syntax is inspired by Quechua, a native language of Abya Yala. It processes sounds from the Andes and contemporary graphics by artisans. The performance is executed by Semilla y Muerte and ###, the audiovisual projects of Milagros Saldarriaga and Marco Valdivia, creators from Perú.
Milagros Saldarriaga
A woman of Abya Yala, from Lima, who finds in the southern highlands the possibility of expanding her experiences and trying to unlearn the thoughts of cement: to breathe the blowing of the apus, drink the water of the clouds, listen to seeds, touch the earth, feel the sun, look at the thunder, as a vital necessity to resist the violence that reigns and in search of harmonizing with life. Just taking off....
Marco Valdivia
He believes in technology and its appropriation as a tool for the development of people, individually and, above all, communally and collectively. He focuses this on sound and audiovisual practice, and on the communication and exchange of knowledge and experiences around these same topics. From this perspective, he has mediated training spaces, shared talks, and presented concerts and works at different festivals, meetings, cycles, and other specialized spaces in Abya Yala and other territories.
Since 2006 he has been developing his work through asimtria.org, an open transdisciplinary platform focused on researching, carrying out, transferring, and sharing various forms of creation based on the use, appropriation, and free development of technologies applied to contemporary experimental music, listening, and audiovisual work, through projects such as Pumpumyachkan, Festival Asimtria, Festival Transpiksel, REUDO - Encuentro de Ruido, and others. He has also collaborated with other organizations and networks in the Latin American region.
E-09 focuses on producing a live sound aesthetic composed of a collage of influences. While the live input of voice and electronic instruments is important, so are the frequencies of the radio bands and the electromagnetic spectrum. Sources can include LW, SW, and FM radio, where local electronic devices, along with self-built mini FM transmitters, interfere with normal reception. The power supplies feeding the devices provide a colour palette of EM-spectrum tones to discover and manipulate intuitively. Other sources of electromagnetic sound are the motors of the instruments used, such as the Dictaphone and the CD player, picked up by a DIY broadband receiver for electromagnetic radiation. Other DIY instruments used are a digital circuit-bending synthesizer and a circuit curve enhancer.
Once captured, all these elements are filtered, dissected, and then superimposed on themselves through the repetition of patterns which are in turn destroyed, generating an organic, dynamic, and progressive audible universe.
metacity self construction
AUTO{}Construction is an audiovisual concert of live coding and virtual reality in a video game environment. The act explores the relationship between speculative architecture and experimental electronic music, taking the phenomenon of informal housing built by "non-architects" in Mexico, the United States, Latin America, Asia, India, and some peripheries of Europe as the basis for 3D imaginaries elaborated collaboratively with machine learning (machines trained to understand this phenomenon and reinterpret it). This audiovisual immersion is guided by a generative soundtrack that controls and gives life to this dystopian reality, a kind of music "self-constructed" by this score of inhabitable VR chaos.
TBA
robotcowboy is a wearable computing platform for exploring new types of man-machine music & artistic performance. Embedded computing, custom open-source software, and audio electronics are utilized to build portable, self-contained systems which both embed and embody computation on the performer. This cyborg approach is both empowering and compromising, as new sonic capability & movement are offset by the need for electrical energy: elements of tension between human and system. robotcowboy shows are always live and contain aspects of improvisation, feedback with the audience, and an inherent capability of failure.
robotcowboy's first 2006-2007 incarnation melded rock with realtime algorithmic composition tools into a dynamic live show. The second incarnation followed the story of the first human on Mars with spacesuit as portable music machine in 2013. The ongoing third incarnation explores themes of trajectories, radiation, and space travel. The future is bright, do you have room to wiggle?
Live coding party music by Servando Barreiro and Per-Olov Jernberg
Per & Servando are audiovisual artists based in Stockholm, where they often meet and collaborate in local artist collectives.
Improvised Audiovisual collaboration
Tools used: Hydra, puredata, Nanoloop FM
https://possan.codes/
http://servando.teks.no
https://www.rumtiden.com/
https://www.blivande.com/
https://www.instagram.com/svartljus/
Shawn Lawson and Ryan Ross Smith will collaboratively live code a single text buffer from two remote locations to perform the audiovisual work. Lawson will live code visuals with the OpenGL Fragment shader and python in Touch Designer and Smith will live code audio with Tidal Cycles.
"Incidental Effects" is a three-part live coding performance.
OS: Debian
Software: ORCA, Carla, Surge XT, QjackCtl
R is a free programming language for data analysis, defined as multi-paradigm: functional, vectorized, imperative, procedural, and object-oriented. These characteristics enable the extended plasticity of its potential as a computational language, used both for data-driven scientific research and for artistic experimentation, among other things.
Art, as a transversal language, keeps open its ability to subvert the established. On this axiom, we propose to use R for the creation of an audiovisual artistic staging, through passages of algorithms and code, using specific libraries for the generation of audio and images that interact with other live coding languages and programs such as SuperCollider and Tidal Cycles.
This live presentation takes place as an audiovisual concert at Festival Piksel 20 and is part of the Rstart project (www.rstart.cl), born out of the concern to experiment with various multi-paradigm free computer programs for transversal creation and production between media arts, science, and digital culture.
Strip & Embellish is a young experimental live sound project founded in 2022 by Graz-based computer music duo Daniele Pozzi and Hanns Holger Rutz. Both have developed specific, individual digital instruments based on the SuperCollider sound synthesis language which are strongly linked together by plugging each other’s sound signal into many nodes and entry points of the opposite system, creating essentially a complex non-linear feedback process. The project name derives from the fact that, on the one hand, Daniele’s continuous effort is to strip down a complex feedback driven system as much as possible while maximising its expressive richness. On the other hand, Hanns Holger creates a signal graph during the first part of the concert that is then repeated in the second part as an “empty structure” which is now newly navigated and embellished by the altered live input signals. This is mirrored by Daniele’s approach of finding “snapshot points” in the structure that may be recalled during the performance.
The Gesturewriter is a unique tool for composing and performing text. The underlying concept is to understand writing utensils as performative instruments, similar to musical instruments! The theremin is known for its playability without physical contact: it is controlled through gestures of the left and right hands. A typewriter prints characters on a sheet of paper and is used as a writing medium to store text. The Gesturewriter combines the theremin's gestural interaction with the typewriter's storing ability, forming an instrument that brings performance and writing together.
People commented that the Gesturewriter has several bugs and is difficult to use. In this performance lecture, I will prove that they are wrong.
The dark power changes hands; it reveals itself with another face. We are heading towards a social uprising of unthought consequences, helped along by the uselessness of the political class. A short-lived power, because whoever holds it will keep it only while governing for the few groups that own this country.
Usurpation Rite is a sound action that uses the gestures and voice of those who supported the social struggle in Ecuador during 2019, turning them into acoustic and mechanical energy and bringing them to the present.