Form/Function
The last two weeks have revolved around exploring a new tool in response to the Realise & Project brief. During unit 2 I had already started thinking about 3D modelling after seeing some examples of how it can be incorporated into a generative workflow, so this seemed like the right time to jump in.
When I spoke to a technician at the Digital media lab, the advice I received was to first get to grips with the basic concepts of working in 3D, as trying to learn them within a coding environment such as three.js would mean tackling several concepts and new terminologies at once. So I decided to start with Cinema 4D and familiarise myself with the medium. I spent the first week following different tutorials, creating works ranging from morphing spheres to a 4-clawed crab, in an effort simply to learn how to make. Two things stood out immediately - C4D actually employs a lot of generative tools and behaviours, albeit hidden behind a UI of sliders, checkboxes and input fields, and some specific tools allow for control and integration of code in Python. I tried to explore the latter through a couple of tutorials, but it soon became clear that these fields are only of real benefit for developing plugins and certain customisations, and that I would need to learn a new programming language to even attempt it meaningfully. I spoke to another technician, looking for any guidance or tips on how to integrate C4D within a creative coding workflow, but sadly he had none to offer.
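For context, scripting in C4D means working in Python against its object system rather than drawing to a canvas the way three.js or Processing do. The fragment below is a minimal sketch of what that looks like, assuming the standard Script Manager environment (where doc refers to the active document); the ring of cubes it builds, and all of its values, are placeholders of my own rather than anything taken from the tutorials I followed.

```python
# Minimal C4D Script Manager sketch (Python): scatter a ring of cube primitives.
# Assumes it is run from the Script Manager, where `doc` is predefined.
import math
import c4d

def main():
    count = 12          # number of cubes (arbitrary)
    radius = 400.0      # ring radius in scene units (arbitrary)

    for i in range(count):
        angle = (2 * math.pi / count) * i
        cube = c4d.BaseObject(c4d.Ocube)              # parametric cube primitive
        cube.SetAbsPos(c4d.Vector(radius * math.cos(angle),
                                  0,
                                  radius * math.sin(angle)))
        cube.SetName("cube_%02d" % i)
        doc.InsertObject(cube)                        # add to the active document

    c4d.EventAdd()                                    # refresh the viewport

if __name__ == '__main__':
    main()
```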
Geronimo, the 4-clawed crab
I spent much of the second week searching for a way to link this new medium to my previous work - using generative design and visual systems to interrogate digital culture. This proved (as usual) to be a very painful process. The issues I had included:
The subject of investigation was too broad and not well defined.
C4D (to my knowledge) does not allow for the integration of external data or the generation of artefacts that can be interacted with in a live environment; it is really only geared towards producing high-fidelity renders as still images or short animations.
Most of the concepts and ideas I had didn’t really make sense within the context of a 3D-rendering medium and seemed to move even further away from the tool in front of me.
I returned to some of the feedback I received during the tutorial session and noted how much of it revolved around two aspects - materiality/form, and the fact that rule-based systems almost always involve some kind of power dynamic and open the door to bias. The idea of questioning the power dynamics within a system intrigued me - what could hacking a system, by bending or breaking its rules, reveal about its nature and the intention of its creators? In practice, however, trying to follow this train of thought proved very difficult: C4D is an extremely complicated system to comprehend, let alone push to its limits, and I’m a novice in this space.
I didn’t want to get stuck in a cycle of conceptualising without actually making anything - so I made stuff. By this I mean that I led with form. I created a small series of renders following a simple set of rules:
Start with the building block of a cube (in a couple of instances a sphere was used).
Apply some generative function to the cube. Basic parameters of that function could be altered for a more ‘interesting’ visual result.
Layer the same or a different generative function onto the structure, without going beyond 3-4 layers of application.
Refrain from using colours, textures or materials. Only a plain backdrop and simple lighting were used.
The images that follow are the result of working according to these parameters; below is also a rough sketch of how the same rule set might read as code.
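This is not how the renders were actually made - everything above was done through C4D’s UI - but the rule set translates fairly directly into a Script Manager sketch. The deformers, segment counts and sizes below are illustrative assumptions, not a record of any particular render.

```python
# A loose code interpretation of the rule set above, written for the C4D
# Script Manager (where `doc` is the active document).
import random
import c4d

def main():
    # Rule 1: start from a single cube, subdivided so the deformers
    # have enough geometry to act on.
    cube = c4d.BaseObject(c4d.Ocube)
    cube[c4d.PRIM_CUBE_LEN] = c4d.Vector(200)   # edge lengths
    cube[c4d.PRIM_CUBE_SUBX] = 20
    cube[c4d.PRIM_CUBE_SUBY] = 20
    cube[c4d.PRIM_CUBE_SUBZ] = 20
    doc.InsertObject(cube)

    # Rules 2 and 3: layer three generative functions (here, simple deformers)
    # onto the structure, in a shuffled order standing in for manual tweaking
    # of parameters through the UI.
    for deformer_type in random.sample([c4d.Obend, c4d.Otwist, c4d.Obulge], 3):
        deformer = c4d.BaseObject(deformer_type)
        deformer[c4d.DEFORMOBJECT_SIZE] = c4d.Vector(200)
        deformer.InsertUnder(cube)   # deformers act on their parent object

    # Rule 4 (no colours, textures or materials) is satisfied by omission:
    # nothing here creates or assigns a material.
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```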
These renders were created somewhat intuitively and arbitrarily, but reflecting on them I did note some things:
Because I was working without a defined output in mind, the process mimicked the one employed in creative coding - an iterative loop in which I would tweak certain variables, generate a result, and repeat.
As engaging with the generative functions within C4D was the focus of the work, and a model was only ‘finished’ once I decided that no further layers of complexity were warranted, the resulting forms are no more than an embodiment of those functions - a visual representation of arrays and other computational algorithms.
These structures could only reasonably be generated by a computer - they are algorithmic structures without a purpose - and embody something of the ‘aesthetic of the machine’.
They also loosely serve as a metaphor for the opaque nature of the complex algorithms being developed today, such as Google’s Deep Dream or the neural networks coming out of Google Brain. These renders are quite intricate, detailed, and complex to look at; in reality, however, they are all composed of a single building block with a few generative functions applied to it. Similarly, Google Brain has developed modes of encryption, generated by adversarial neural networks, that are simply incomprehensible even to their own human developers. This is discussed in New Dark Age (a book I have already referenced several times), where Bridle goes on to say:
To [Isaac Asimov’s Three Laws of Robotics] we might add a fourth: a robot - or any other intelligent machine - must be able to explain itself to humans. Such a law must intervene before the others, because it takes the form not of an injunction to the other, but of an ethic. The fact that this law has - by our own design and inevitably - already been broken, leads inescapably to the conclusion that so will the others. We face a world, not in the future but right now, where we do not understand our own creations. The result of such opacity is always and inevitably violence.
The relationship between this work and Bridle’s writing is loose at best, but there is something interesting to me about this kind of formal exploration within this context.

Lastly, I attempted an iteration within a more specific context - predictive policing. I chanced upon a podcast episode dealing with a predictive policing model used by the LAPD that led to the shooting of a prominent black rapper and entrepreneur. I tried to create a sketch using a similar process, incorporating some pre-rendered models I downloaded, to arrive at something somewhat illustrative of the subject (similar to the approach I had used during the associate brief in dealing with bias in the AI community). The result was slightly underwhelming, made my MacBook fan spin up with reckless abandon, and ultimately did not produce any new kind of information in isolation. At least I tried.
I also made an attempt at using one of the renders generated in the first round as raw material for an interactive poster layout in Processing. It was very much a draft test of the idea, and anything interesting about it has more to do with the interactions afforded by Processing than with the formal qualities of the render. That said, combining the two media in some way might be a basis for future work.
A render illustrating some aspects of predictive policing
A draft Processing sketch using material generated from the C4D renders
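For completeness, the draft Processing test amounted to little more than a layout that nudges and scales the imported render with the mouse. The sketch below is a rough reconstruction of that idea rather than the original file, written here in Processing’s Python mode to keep one language across these notes; the filename, text and layout values are placeholders.

```python
# A rough reconstruction of the draft poster test, in Processing's Python mode.
# "c4d_render.png" and the layout values are placeholders, not the real assets.

poster = None

def setup():
    size(600, 800)                          # poster-ish aspect ratio
    global poster
    poster = loadImage("c4d_render.png")    # an exported C4D render in /data

def draw():
    background(245)

    # Both the render and the headline drift with the mouse, so the layout
    # shifts as the viewer moves across the poster.
    offset = (mouseX / float(width) - 0.5) * 80.0
    scale_factor = 0.6 + (mouseY / float(height)) * 0.6

    w = poster.width * scale_factor
    h = poster.height * scale_factor
    image(poster, width / 2 - w / 2 + offset, height / 2 - h / 2, w, h)

    fill(20)
    textSize(28)
    text("FORM / FUNCTION", 40 + offset, 60)
```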
After week 2’s tutorial session I’m inclined to keep things simple - to focus on C4D as a medium and on the kind of generative forms I can create by following a systemised workflow. I’m going to think about variables and constraints and the formal qualities they lead to. I’m also considering shifting the work into a particular context that overlaps with an aspect of digital culture or infrastructure, to make it more focused while still allowing me to work quickly, simply, and iteratively. Lastly, I attempted a few more C4D tutorials in an effort to discover other generative tools and workflows that I might incorporate into work to come.