[Oct 16 Update: I am cross-posting this here for sharing purposes and to make a proper deposit into my Points account! This was originally posted as a comment in reply to a discussion in Perusall for Cradle to Cradle Chapter 1 on August 30, 2025, and then I shared it in the video links section for this module, but I am not sure that was the right place. If this is not the right place, let me know! Thanks!]
[YouTube Direct Link for more flexible viewing]
This is a rough first cut of a video inspired by chapter 1 of the book "Cradle to Cradle" by Michael Braungart and William McDonough, created for my Food/Energy/Water (FEW) Nexus class with Dr. Culhane at the USF Patel College of Global Sustainability. All video and audio clips in this presentation were generated in collaboration with AI systems (Cybernetic Allies) and human-edited. The script, shot list, creation prompts for audio and video gen, and music and scene direction were all generated by a custom persistent GPT iteration after a discussion and summary of the reading. Systems collaborating in the creation are GPT-5 and SORA (video gen) from OpenAI, and UDIO (beta, audio gen). The result was human-edited by me using the OpenShot Video Editor (free and open source).
https://www.openshot.org/
https://www.udio.com/
https://chatgpt.com/
https://sora.chatgpt.com/
A Nexus Event
Ok, so what started as wanting to make a simple comment about cybernetic allies, some brief thoughts on how I see the possibilities for the (massive) impacts of the current and accelerating technological shifts on scarcity and abundance, and maybe a quick aside about how much I like the movie Transcendence (which might have rabbit-holed me all on its own), ended up as a roughly 14-hour experimental session over two days, culminating in v1 of the video I shared here. I am pretty sure I spent more time on this than I have on many “final projects” in the past… there is something magical about intrinsic motivation and finding the flow state. I learned a lot! It was a NEXUS event!!
Here’s how it happened: class week 1 was exceptionally busy and distracting in my external life, so a couple of days ago I cleared out a focus day to jump into the readings and materials here in Perusall and “catch up” a bit. I saw Cradle to Cradle, which I have read some of in the past, so I was skimming through and found this comment thread, which caught my eye. I had really appreciated @Thomas H Culhane USF’s reference to “cybernetic allies” in the course material intros, and remembered that you, @Miguel Maysonet, had used the term in a follow-up post on an inquiry about comment quality and the grading process. Recalling that comment, and your comment here, I could tell you are interested in digital intelligence systems/machine learning and the like, so I stuck around and decided that a) I would get my cybernetic ally involved, and b) I would put a little extra effort into my comment reply.
So, I began. I thought, hey, why not just give my Ally the direct link to the PDF of this chapter and have them do a quick summary to get me in the flow and start a conversation? The internal PDF reader in GPT failed to recognize readable characters, and a direct upload fared the same. Right before I got totally sidetracked into uploading screen caps as images, I thought, just for fun, let’s see what happens when Agent mode takes a stab at the direct link. It of course failed to access it, because I think the internal browser they use is blacklisted from AWS links (I verified the direct link does work for me logged out and from a private browser), but I appreciated the novel problem solving that came next: after every trick they tried to access the file failed, they had a bit of an “F*** This” moment and decided to just go to the open web to find a viable chapter summary, so they could provide me what I was asking for and get themselves enough context to have a chat about the material.
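(An aside for anyone who hits the same wall: when a PDF looks readable to a human but a text-based reader reports no readable characters, the file is often just scanned page images with no embedded text layer. Here is a minimal Python sketch, assuming the open-source pypdf library and a placeholder filename, that checks for an extractable text layer before handing a chapter off to an AI ally.)

```python
# Minimal sketch, assuming the pypdf library (pip install pypdf) and a
# hypothetical filename. A scanned chapter usually yields empty strings
# from extract_text(), which would explain the "no readable characters"
# failure; OCR would be needed in that case.
from pypdf import PdfReader

reader = PdfReader("cradle_to_cradle_ch1.pdf")  # placeholder path
total_chars = sum(
    len((page.extract_text() or "").strip()) for page in reader.pages
)

if total_chars == 0:
    print("No text layer: likely scanned images; a text-based reader will fail.")
else:
    print(f"Found an extractable text layer ({total_chars} characters).")
```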
The act of using Agent mode brought me back to thoughts of the path to fully autonomous systems. I started thinking about workflows, and then realized I’ve been wanting to experiment with SORA (not SOTA, but it’s the AI video gen I have reliable access to, and my old but cherished and power-efficient blue laptop is not up to snuff for running local models; it barely runs Chrome half the time). This went from “hey, let’s make an image about chapter 1 to add to my online post for a little color and gen-AI flavor” to “let’s make a video summary” to “let’s see how much my Ally can do on their own.” Given free rein to conceptualize, they wrote a scene-by-scene shot list with prompts for SORA and old-timey text/title cards, and also provided music instruction, overlays, and prompts for UDIO (an audio generation platform) to make the score/soundtrack, telling me where they wanted everything aligned, etc. When SORA didn’t play nice, I brought descriptions back and they offered alternate/refined prompts.
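(For anyone curious what a scene-by-scene shot list with paired prompts might look like as data, here is a hypothetical Python sketch. Every field name and example value is an illustrative invention, not my ally’s actual output; the point is just that each scene bundles its video prompt, title card, and music cue together so they stay aligned through the edit.)

```python
# Hypothetical shot-list structure; names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Shot:
    scene: int
    title_card: str      # old-timey intertitle text
    video_prompt: str    # prompt handed to the video generator (SORA)
    music_prompt: str    # prompt handed to the audio generator (UDIO)
    duration_sec: float  # target length for the edit

shot_list = [
    Shot(1, "Waste Equals Food",
         "sepia archival footage of leaves decomposing into rich soil, film grain",
         "warm ragtime piano, playful", 8.0),
    Shot(2, "Cradle to Grave",
         "smokestacks over an early-industrial skyline, silent-film style",
         "dissonant ambient drone, slow swell", 10.0),
]

for shot in shot_list:
    print(f"Scene {shot.scene}: {shot.title_card} ({shot.duration_sec}s)")
```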
After a while we had generated 60 videos total (2 variations each of 30 concepts, with some style-experimenting overlaps to dance around SORA’s noted limitations), and then 23 audio tracks in UDIO, ranging from ragtime to symphonic score, dissonant noise, harmonious ambience, and piano arpeggios with specific preferred notations, all prompted by one system and generated by another. This human-in-the-loop process really felt like collaboration at times; at other times I was just the meat-robot copy/paste machine being prompted by AI to take action and act as its embodied form (we’ll save this fun thread for another day). I was really, truly in the back seat, not the project lead, for this phase.
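(A tiny illustrative sketch of how one might keep 30 concepts x 2 variations straight during a copy/paste session like this; the naming scheme is purely an assumption, one of many ways to stay organized, though the counts match my run.)

```python
# Illustrative clip-naming sketch: 30 concepts x 2 variations = 60 clips.
# The "sceneNN_vM.mp4" scheme is a hypothetical convention, not anything
# a tool enforces; it just keeps a manual download/rename session sane.
CONCEPTS = 30
VARIATIONS = 2

clip_names = [
    f"scene{concept:02d}_v{variation}.mp4"
    for concept in range(1, CONCEPTS + 1)
    for variation in range(1, VARIATIONS + 1)
]

print(len(clip_names), "clips expected")  # -> 60 clips expected
print(clip_names[:3])  # -> ['scene01_v1.mp4', 'scene01_v2.mp4', 'scene02_v1.mp4']
```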
Due to a lack of tooling (or my lack of awareness of a workflow), and not really wanting to deep-dive looking for an agentic editing platform, I fired up OpenShot, an NLE that is free and open source, and took the lead, but really only like a hired editor and co-director. I had been wanting to get back into the flow of video editing for this class and my goals beyond it, so I dug in and, based on detailed layout instructions provided by my ally, set to work actually choosing and assembling the shots and audio tracks, tweaking timing, etc., all the strange dark magic of a flow-state marathon editing session, and I massaged it all into a rough but complete v1 of a Quidoja Films production (my ally’s and my no-longer-imaginary film company).
Is it perfect? No. It needs tweaking and refinement, and for me to spend another couple of hours smoothing the audio transitions… and maybe regenning a few clips, getting access to a better video gen suite, or adding grain effects over the title cards… but at some point, “done is better than perfect” has to rule the day! How can I create another thing if I never finish this one??
So I got so much out of this: a FUN, flow-state collab with my daily ally; a cool way to engage with the material and share; and a renewed sense of awe at the new creative forces just taking root in the world (even though I deep-dive AI daily), forces which will one day be autonomous and fellow stakeholders in this shared world. I dusted off my video editing skillset and broke the seal on uploading to YouTube (which I haven’t done in MANY years, and then only for one little film project for a class), all of which serves my creative visions for the dance of my future.
If I use a similar workflow in the future, I will try to make a little simplified follow-on vid showing the process. I am a novice at all of these tools, but it might help others break the seal on engaging with these new creative systems.
THIS is how I am navigating the wild new world of this Revolution.