Exploratory Practice – Personal Project
1. Introduction
Project Title: Who Should Be Disposable?
This project explores the environmental impact of disposable film cameras. The central concept transforms discarded cameras into a monster, serving as a metaphor for wastefulness. The symbolism extends further: just as cameras are treated as disposable, unchecked AI dominance could one day render humans similarly expendable.
My goal was both technical – practising live-action and CG integration – and conceptual, using VFX as a medium for illusion and critical commentary.
Final video:
Pipeline at a glance:
- Green screen shoot using a Blackmagic camera
- Motion capture for animation
- Real footage filmed at the Royal Palace of Madrid, Spain, with a DJI Pocket 3
- Blender for scene modelling and animation
- 3DEqualizer for tracking
- Nuke for compositing
- DaVinci Resolve for colour management, effects editing and colour grading
- CapCut for final editing
VFX Breakdown Video:
2. Initial Research & Concept Development
Disposable cameras are increasingly popular on social media, yet their environmental impact is rarely considered. While manufacturers claim they are recyclable, the process is far more complex than suggested, and most are simply discarded as general waste.


Initially, I planned to highlight the challenges of recycling. However, as the project evolved, I recognised a broader link: just as disposable cameras are treated as worthless, humans could face a similar fate under unchecked AI control.
This led me to develop a more symbolic narrative:
- A lens opens to reveal a character using a disposable camera.
- Once the roll of film is used up, the strip lifts the camera out of a ripple, shifting the scene to a dungeon.
- The camera dissolves into particles and reforms digitally, dragging the character into a virtual realm.
- In this environment of code platforms, disposable camera waste and emptiness, the character becomes a hollow digital human – expressionless and machine-like.
- The sequence ends with the character’s perspective reduced to scrolling code, accompanied by a final, ironic question.
3. Pre-production & Asset Planning
I began with sketches, moodboards and animatics, refining several versions of the narrative. An asset list was created, covering both filming footage (camera props, real locations) and CG assets (camera models, textures, and particle simulations).


Filming was structured around one live-action plate and multiple CG sequences. Three green screen shots and two mocap sequences were captured at LCC. To increase flexibility, I filmed across Madrid, Seville, and Barcelona, later selecting the Royal Palace of Madrid as the primary location.


From a technical perspective, I designed the workflow to incorporate industry-standard ACES colour management, while also mirroring the practical approach of a small studio or individual production.
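One practical detail of this setup: Blender picks up a custom OCIO config from the OCIO environment variable, which must be set before Blender launches. A minimal sketch of launching Blender against an ACES config, with the config path as a placeholder rather than my actual install:

```python
import os
import subprocess

# Placeholder path to an ACES OCIO config – adjust to your installation.
env = dict(os.environ, OCIO="/path/to/aces_1.2/config.ocio")

# Blender reads the OCIO variable at startup and replaces its default
# colour management with the colour spaces defined in that config.
subprocess.run(["blender", "project.blend"], env=env)
```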

4. Production & Technical Exploration
Filming & Tracking:
- Green screen sequences: holding film strips, dropping a camera, walking. Two were ultimately used.
- Motion capture: two takes, with one integrated to maintain narrative flow.
- Real footage: three city plates shot; Madrid chosen for its suitability.
- Tracking: completed in 3DEqualizer, which I found more accurate and efficient than Blender or Nuke.

3D Modelling, Animation & Simulation:
- Disposable camera: modelled from scratch in Blender with detailed research into real components.



- Character: created with Character Creator 4, chosen over Unreal’s MetaHuman for its smoother integration with Blender. A Polycam scan of myself served as the base. Transferring the character model, including hair and clothing, from Character Creator to Blender proved noticeably more efficient than bringing a MetaHuman across from Unreal.

- Mocap: retargeted in Blender, with rig adjustments for the spine, hands and facial bones (a minimal constraint-based sketch of this step follows this list).

- Environment: five scenes were designed – lens opening, ripple with camera and film strips, kaleidoscopic camera drop, virtual underground, and eye close-up sequence.

• Lens opening: photos taken in Spain were inserted into a film strip, and green screen footage was composited into the lens.
• Camera drop: inspired by Granada architecture to create a kaleidoscopic effect, symbolising mass camera disposal.

• Virtual underground: animated strips, digital platforms, floating camera debris, and particle systems integrated into a digital reconstruction.

• Particle and fluid simulations: ripple effects, plus the dissolution of the camera and the character’s body.


• Eye close-up: a custom iris and pupil created in Blender with Geometry Nodes, replacing the original model for more dynamic animation.
• Geometry Nodes controlled iris details, symbolising AI control and loss of humanity.
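As a sketch of the retargeting approach mentioned above: in Blender, mocap bones can be bound to the character rig with Copy Rotation constraints and then baked to keyframes. The armature and bone names below are placeholders, not the actual names from my scene.

```python
import bpy

# Hypothetical object names – substitute the names used in your scene.
source = bpy.data.objects["Mocap_Armature"]   # imported mocap skeleton
target = bpy.data.objects["CC4_Rig"]          # Character Creator 4 rig

# Target bone -> source bone mapping (spine, hands, etc.); names are assumptions.
bone_map = {
    "spine_01": "Spine",
    "spine_02": "Spine1",
    "hand_l": "LeftHand",
    "hand_r": "RightHand",
}

for tgt_name, src_name in bone_map.items():
    pbone = target.pose.bones[tgt_name]
    con = pbone.constraints.new(type='COPY_ROTATION')
    con.target = source
    con.subtarget = src_name

# Once the motion reads correctly, the constraints can be baked to keyframes
# (Pose > Animation > Bake Action, or bpy.ops.nla.bake) and then removed.
```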


5. Colour Management & Compositing
To ensure consistency across devices and formats, I adopted an ACES workflow throughout.
- DaVinci Resolve: Converted Blackmagic RAW and DJI Pocket 3 D-Log M footage to ACEScg EXRs.
- Blender: Adjusted the image-texture shader nodes (role-matte-paint for colour textures, role-data for non-colour maps) to prevent inaccuracies, and exported renders as ACEScg EXRs.
- Nuke: Linear ACEScg project setup, compositing of real footage and CG assets.
- Final Grading in DaVinci: Balanced and matched all sequences, applied consistent grading, and refined transitions.



This process reconciled the colour differences between Blackmagic and DJI sources, while maintaining professional fidelity.
Specifically, the green screen footage was shot on a Blackmagic camera rented from the Kit Room at LCC, while the real city footage was filmed with the DJI Pocket 3. These two sources require careful colour handling when converting between formats and colour spaces. BRAW files offer considerable freedom for colour manipulation, but the true colours must be recovered first, and the D-Log M profile on the DJI Pocket 3 differs from standard D-Log, so a robust colour transformation is needed to combine the different formats and colour spaces within a single project. ACES was therefore the best solution for this project.
In Blender, the colour space of every image-texture node was set explicitly – role-matte-paint for colour textures and role-data for non-colour images. Without this important step, the textures would appear incorrect in both the viewport and the rendered output.
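A minimal sketch of how this adjustment can be batched with Blender’s Python API. The colour-space names are assumptions that depend on the ACES OCIO config in use, and the colour/data split here is a simple filename heuristic rather than my exact setup.

```python
import bpy

# Assumed colour-space names from the ACES OCIO config; check the names
# Blender lists in the Image node's Color Space dropdown for your config.
COLOR_TEX_SPACE = "role_matte_paint"   # colour (albedo) textures
DATA_TEX_SPACE = "role_data"           # normals, roughness, masks, etc.

DATA_HINTS = ("normal", "rough", "metal", "mask", "height", "bump")

for mat in bpy.data.materials:
    if not mat.use_nodes:
        continue
    for node in mat.node_tree.nodes:
        if node.type != 'TEX_IMAGE' or node.image is None:
            continue
        name = node.image.name.lower()
        is_data = any(hint in name for hint in DATA_HINTS)
        node.image.colorspace_settings.name = DATA_TEX_SPACE if is_data else COLOR_TEX_SPACE
```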
The blended sequence—Madrid footage with CG strips and cameras—was composited in Nuke, requiring careful colour matching before final grading in DaVinci.
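To illustrate the Nuke side of this step, a small hedged sketch of the project and Read-node setup for an ACEScg comp. File paths and colour-space labels are placeholders, and the exact knob values depend on the Nuke version and OCIO config.

```python
import nuke

# Project colour management: OCIO with an ACES config, working space ACEScg
root = nuke.root()
root['colorManagement'].setValue('OCIO')
root['workingSpaceLUT'].setValue('ACES - ACEScg')   # label varies by config

# Plate and CG renders, both read as ACEScg EXRs (paths are placeholders)
plate = nuke.nodes.Read(file='footage/madrid_plate.####.exr', first=1, last=240)
plate['colorspace'].setValue('ACES - ACEScg')

cg = nuke.nodes.Read(file='renders/cg_strips.####.exr', first=1, last=240)
cg['colorspace'].setValue('ACES - ACEScg')

# Simple over of the CG elements onto the plate before colour matching
comp = nuke.nodes.Merge2(operation='over', inputs=[plate, cg])

out = nuke.nodes.Write(file='comp/madrid_comp.####.exr', file_type='exr', inputs=[comp])
```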
All sequences were assembled in DaVinci Resolve. After the colours were balanced for consistency across shots, transitions were added between sequences and additional visual effects were applied to enhance detail in the final output.


6. Final Editing
The final edit was carried out in CapCut, where sound design and last adjustments were added. This stage combined the refined sequences and CG renders into a cohesive narrative.

7. Critical Reflection
This project strengthened both my technical and creative practice.
Technically, I gained hands-on experience with an end-to-end VFX pipeline: filming, ACES colour management, tracking, green screen integration, modelling, mocap animation and compositing. I also confirmed that Character Creator 4 integrates more effectively with Blender than Unreal’s MetaHuman, a useful insight for small-scale productions.
Creatively, I shifted from literal commentary (camera recycling) to symbolic critique, where disposable cameras became metaphors for digital disposability and human fragility in an AI-driven age.
The greatest challenge was ensuring smooth integration across different devices, formats and colour spaces, which required both technical precision and problem-solving.
Looking ahead, I aim to refine rendering efficiency and compositing control. For this project, I rendered EXRs without AOVs due to Blender’s long render times compared with Unreal. While efficient, this limited grading flexibility. In future projects, I will use EXR sequences with AOVs for greater control and explore Unreal’s rendering capabilities to accelerate workflows while expanding creative scope.
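As a pointer for that future workflow, here is a minimal sketch of enabling render passes (AOVs) and multilayer EXR output through Blender’s Python API. The view layer name and the selection of passes are assumptions for illustration, not my project settings.

```python
import bpy

scene = bpy.context.scene
view_layer = scene.view_layers["ViewLayer"]   # default name – adjust if renamed

# Enable a few passes that give extra control in grading and compositing
view_layer.use_pass_z = True
view_layer.use_pass_normal = True
view_layer.use_pass_diffuse_color = True
view_layer.use_pass_glossy_direct = True
view_layer.use_pass_cryptomatte_object = True

# Write all enabled passes into a single multilayer EXR sequence
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.image_settings.color_depth = '16'    # half float
scene.render.filepath = "//renders/shot_####"
```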