Creality - a Proposal for LIB 2024
Note: This is a demo of stable diffusion technology created by our collaborator. The visual composition for LIB 2024 would be an expanded version with added elements further explained in this proposal.
Thank you for the opportunity to submit a proposal for LIB 2024. This project brings together the collaborative efforts of a multidisciplinary community of globally renowned scientists, technologists, philanthropists, and organizations, aiming to create transformative immersive experiences. We believe our dedication to advancing transformative, inspirational, and scientifically grounded experiences aligns beautifully with the values, ethos, and aesthetics of the LIB festival.
Our team is developing projects for venues such as the Playa Alchemy Pyramid at Burning Man Festival in 2024, as well as planetariums, domes, and video projection venues around the world, such as the Madison Square Garden Company's Sphere. The LIB festival would be our premiere platform for this iterative, interactive, multimedia project, showcasing impactful regenerative art.
For this endeavor, we are developing an innovative, interactive, AI-generated immersive experience slated for presentation at LIB, designed to invoke transformative states of awe among participants. I, Amanda Gregory, a multimedia performer, opera singer, neuro-music composer, and psychoacoustic sound artist who blends opera with visual elements, am collaborating with an international cadre of experts in machine learning, biometric technologies, scientific research, and spatial sound engineering.
By harnessing the most advanced stable diffusion and gestural control technologies, we are crafting a 30-minute, time-lapsed voyage through the universe. Our goal is to kindle inspiration for a dynamically collaborative and regenerative future, spotlighting the profound potential we hold as creators in this world. This project would combine AI-generated music, AI-generated visuals, 3D sound, motion tracking, gesture control of synchronized sound and visual parameters, vocal cymatics, and psychoacoustics, offering a unique, immersive experience that highlights the transformative power of AI in fostering deep, emotional, and cognitive engagements with art.
The visual journey begins with the Big Bang, swiftly transitioning into a time-lapse of galaxies forming, culminating in Earth's formation. The focus narrows to Earth, capturing its transition through time until the emergence of life, zooming into hot springs and the primordial microbial soup where life first sparked. The narrative then dives into a microscopic view, where colliding DNA, organelles, and proteins organize into a single cell, before zooming further into this cell.
From this microscopic perspective, the scene transitions into a time-lapse of evolutionary biology, tracing the lineage of human ancestry from a single cell through various organisms still residing in the ocean. The viewpoint shifts to a first-person perspective, following organisms as they venture onto land, evolving limbs and hands, then broadening to showcase the dynamic landscape of nature and the transformation of plant life over eons.
The narrative returns to the early mammals in human ancestry, presenting a panoramic view of nature's time-lapse across different global environments. A creative twist introduces a grid of windows, reminiscent of a Zoom meeting, each displaying a stable diffusion time-lapse of different evolutionary branches - reptiles branching into birds, mammals, and other species. These windows eventually merge into a cohesive scene depicting humans, animals, and nature evolving together across continents and environments through ancient history to the present.
The focus then zooms into a human cell, gradually zooming out to artistically visualize the electromagnetic activity at each scale, eventually revealing a community of organisms. This expands into concentric circles of people engaged in a monkey chant, evolving into ecstatic dance with silent disco headphones. This visualization highlights the interconnected human biofields with the electromagnetic fields of the surrounding trees and atmosphere, observing the synergetic exchange of oxygen and carbon dioxide between trees and animals, including humans. Projecting potential futures of harmonious coexistence with nature, the narrative integrates advancements in synthetic biology and artificial intelligence, culminating in a reflection on the interconnectedness of all beings and a rapid montage of inspiring future world/reality depictions.
Zooming out further, the entire biosphere comes into view, showcasing the elemental dance of water, fire, earth, and air, drawing parallels between the systems of the human body and the biosphere. The scene then zooms out to the sun at the center, with orbiting planets visualized in spiraling geometric patterns and trails of movement, before expanding to showcase a nebula. The screen transitions rapidly between various nebulae, emphasizing the diversity and beauty of these celestial structures, highlighting the fractal parallels between nebulae and cells.
The narrative continues to zoom out, revealing galaxies in a manner akin to observing how cells form an organ system, eventually showing the universe and then the concept of the multiverse. The journey concludes by tying back to the concept of infinite Big Bangs, creating a cyclical narrative that connects the macrocosm of the universe to the microcosm of cellular life, and ultimately, to the ongoing cycles of creation and existence.
Artistic Vision and Scientific Inspiration
Audiovisualization of Neurobiology and Theories of Consciousness
Moreover, the narrative weaves in scientific theories on consciousness, such as Jonathan Schooler’s Nested Observer Window Theory, offering audiences a fractal journey from the minutiae of subatomic particles to the vastness of the multiverse, exploring the interconnected streams of consciousness. In partnership with visual artist and TouchDesigner programmer Scott Gregory, we employ specific gestures to modulate visuals that map elements and parameters of Schooler’s Three Dimensions of Time Theory. The accompanying soundscapes are designed to highlight consciousness "windows" across different scales, with tempo variations symbolizing the dynamic flux in subjective time experiences across diverse species. In later stages, the visuals present a tapestry of parallel potential realities, illustrating Schooler’s theory on alternative dimensions of time. Here is a link to explore his theory.
Science of the Imagination
The visual narrative will also depict how human imagination has intertwined with the material world throughout history, progressing into a sequence where a blossoming montage of thousands of AI-generated images envisions a more beautiful world. These images are created from text prompts sourced in collaboration with the Buckminster Fuller Institute, the Design Science Studio, and the Arthur C. Clarke Center for Human Imagination. This element of the project includes collaboration with Roxi Shohadaee, the director of the Design Science Studio; researcher Zoë Fowler; and Cassandra Vieten, Ph.D., a renowned expert in consciousness studies and founder of the Science of the Imagination conference. As the Director of Research at the Arthur C. Clarke Center for Human Imagination, and having been the CEO/President of the Institute of Noetic Sciences (IONS), Cassie has much experience merging scientific inquiry with technology, humanities, and the arts to explore the potential of imagination.
Similarly, Zoë Fowler has made significant contributions to research that aligns with our project's core motivations, especially regarding how the vivid visualizations of imagination can be materialized and enhanced through collaborative efforts. Zoë's pioneering research provides insights into how shared imagination can not only anticipate the future but also reinforce social connections in the present.
Recent collaborator Kyle Fiore Law has conducted research involving eight studies with 9,570 US participants, revealing that self-oriented prospection—imagining oneself in the future—strengthens our sense of responsibility towards future generations. It demonstrates that seeing our present and future selves as connected (Future Self-Continuity) and considering how our actions impact our future (Consideration of Future Consequences) boosts our sense of responsibility, efficacy, and identification with future generations. Our collaboration aims to prime audiences to envision themselves in these future scenarios, adopting roles such as that of a congress member, to foster behavior change towards long-term collective welfare. By integrating these insights, we aim to inspire actions that benefit future generations, leveraging the power of AI to create images that represent not only a more optimistic future but also motivate individuals to contribute to its realization.
Through collaboration with multiple organizations, we plan to amalgamate thousands of optimistic visions from around the world, creating AI-generated visualizations of a world that is harmonious, mutually rewarding, technologically advanced, sustainably biodiverse, exponentially regenerative, just and creative. We acknowledge that this endeavor is ambitious. Nonetheless, we are hopeful that the performance experience will not only be personally transformative and offer psychophysiological benefits, but will also enhance the potential for humanity's highest aspirational realities to unfold.
Technical and Creative Process
Audiovisual Composition Structure
Below is an example of various elements of this project. The live performance ResoVoir was showcased at Gray Area for the Buckminster Fuller Institute's 40th Anniversary in San Francisco. This performance merges NASA footage and my twin brother Scott Gregory's kaleidoscopic video art of patterns in nature with an immersive sonic landscape. The sound emulates the brain wave journey of a 24-hour day time-lapsed into 24 minutes. It exemplifies our approach to blending art with science, as seen in the therapeutic auditory exploration that engages neurostimulation protocols to soothe and stimulate the parasympathetic nervous system and enhance gamma brain wave activity.
The performance intricately layers voice-generated psychoacoustic effects, guiding participants through a curated sequence of scientifically backed frequencies, like the harmonious 528 Hz, and the natural songs from a variety of species. By embodying the biorhythms of life through sound and visual art, ResoVoir preludes the ambitious scale of our Creality project, where such principles will be expanded and woven into a narrative of cosmic evolution and imagination.
For Creality, we dive deeper into collaborator Jonathan Schooler and Tam Hunt's Vibrational Resonance Theory, with the added dimension of live, sonified biorhythms, including my heartbeat and breath patterns, modulating the music in real-time. This integration exemplifies the innovative spirit of our technical and creative process, where the line between performer and audience, biology and technology, becomes beautifully blurred. Our collaboration with experts like neuroscientist Tim Mullen and biometric designer and psychophysiologist Alan Macy makes this possible. Representing the music that each body creates in conjunction with the Schumann Resonance illustrates the profound interconnectedness of our physical existence with the larger cosmos.
*Listen with headphones for psychoacoustic effects
In the creation of Creality, our technical and creative methodologies are intricately woven together to generate an AI-driven video art and music experience that pushes the boundaries of immersive art. The core of our musical composition leverages data sonification, 3D sound spatialization, and psychoacoustic principles to induce experiences of time dilation, altered states, biorhythmic synchronization, beneficial EMDR effects, and a nuanced understanding of the mathematical patterns pervading the Universe at various scales. Our sound design is inspired by therapeutic neurostimulation protocols aimed at engaging the parasympathetic nervous system and enhancing sensorimotor rhythms. By utilizing scientifically validated frequencies, such as 528 Hz, and incorporating the vocalizations of diverse species, we invite participants into a state of deep meditation.
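As an illustrative sketch only (not our production sound engine), the basic psychoacoustic building block of a binaural pair can be expressed in a few lines: two sine channels offset by a small frequency difference that the listener perceives as a slow beat. The 528 Hz center and 6 Hz offset below are example values, not our actual composition parameters.

```python
import math

def binaural_pair(center_hz=528.0, beat_hz=6.0, seconds=1.0, rate=44100):
    """Synthesize a stereo pair whose channels differ by `beat_hz`.

    Heard over headphones, the small left/right frequency offset is
    perceived as a slow "binaural beat". All numbers are illustrative
    defaults, not the values used in the actual performance.
    """
    n = int(seconds * rate)
    left = [math.sin(2 * math.pi * (center_hz - beat_hz / 2) * i / rate)
            for i in range(n)]
    right = [math.sin(2 * math.pi * (center_hz + beat_hz / 2) * i / rate)
             for i in range(n)]
    return left, right

left, right = binaural_pair()
```

In practice these channels would be rendered through the spatial audio system rather than written out sample-by-sample in Python; the sketch only shows the principle.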
Collaborating with Alexis Crawshaw, a specialist in haptic vibration technology, we employ infrasonic technologies to generate profound somatic effects through low-frequency sound waves.
Further enhancing the experience, we are co-designing the infrasonic sound with Alan Macy, a pioneer in biometric technology engineering, to achieve optimal coordination with 3D sound and resonance with the autonomic nervous system. Our collaboration extends to Tim Mullen, a neuroscientist and machine-learning engineer, and Macy, to incorporate a wireless biosensor that translates biorhythms into sound, enriching the performance with live-generated psychoacoustic vocals and natural rhythms, all synchronized with the amplified Schumann Resonance.
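The biosensor hardware and signal processing are the domain of our collaborators; purely as a schematic of the mapping step, translating a biorhythm into a musical control value can look like this (the function names and the 60-180 BPM clamp range are invented for illustration):

```python
def heart_rate_bpm(peak_times):
    """Estimate beats per minute from a list of heartbeat timestamps (seconds)."""
    intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def bpm_to_tempo(bpm, lo=60.0, hi=180.0):
    """Clamp a physiological rate into a musically usable tempo range."""
    return max(lo, min(hi, bpm))

# A steady 0.8 s inter-beat interval corresponds to 75 BPM.
tempo = bpm_to_tempo(heart_rate_bpm([0.0, 0.8, 1.6, 2.4]))
```

The same pattern generalizes to breath: a slow periodic signal becomes a modulation source for filter depth or visual intensity rather than tempo.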
AI Stable Diffusion Technology
Here is a demo by our collaborator Xander Steenbrugge, creator of the ML software eden.art, showing one of the stable diffusion technologies we are integrating for this project: human-dreamed, AI-generated visuals. A sample of the technique is below; note that the visual storyline will be extensively expanded with additional scenes, and the live performance will include a variety of interactive and audio-reactive elements.
My twin brother, Scott Gregory, uses TouchDesigner to run video footage through a series of effects, creating hypnotic visuals designed to synchronize with my music and induce lucid dream states in the listener. For Creality, we will periodically zoom into various aspects of nature, such as a water molecule and, later, the Sun, where these artistic kaleidoscopic visuals will help viewers grasp the fractal Fibonacci spirals found in cymatics and throughout nature. For more info, read the description at the link below.
*Listen with headphones for psychoacoustic effects.
In partnership with 3D sound engineer Elan Rosenman, we delve into sound holography, creating immersive audio scenes that reflect natural and cosmic patterns, such as the Fibonacci sequence, toroidal fields, and the planetary orbits of our solar system.
Our team would like to work closely with the sound production team of the Compass stage, adapting to and enhancing the existing sound infrastructure to deliver a comprehensive 12-channel ambisonic spatial audio system. This system is designed to encapsulate the audience in a multidimensional auditory experience, mirroring the music of the cosmos and the intrinsic resonances of the human body at all scales.
Below is an example of the motion tracking gloves used for 4D spatial sound while performing at the Google IO conference in 2019. In the performance of Creality, our team would replace the motion tracking gloves with cameras that detect motions and gestures to modulate the sound and visuals. We are excited to further explore artistic, technological, and scientific synergy while actively involving the audience in the creative process, making them co-creators of the transformative journey.
Our team aims to sonify patterns of nature, including the movement of the Earth's toroidal field that creates the Schumann Resonance, a DNA helix, and the spiraling rotation of orbiting bodies in our solar system.
Examples below are of the motions that would be sonified and spatialized. Audiences would be at the center of each sound hologram of a 12-channel ambisonic sound system that we create in collaboration with the venue.
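As a schematic of how a sonified motion path could be distributed over such a 12-speaker ring (this is simplified amplitude panning for illustration, not the venue's actual ambisonic decoder; the cosine gain law is an assumed stand-in):

```python
import math

def ring_gains(source_angle, n_speakers=12):
    """Amplitude-panning gains for a virtual source on a ring of speakers.

    Speaker k sits at angle 2*pi*k/n; its gain falls off with angular
    distance to the moving source, and the set is normalized so total
    power stays constant as the source travels its spiral or orbital
    path. A real ambisonic decoder would replace this entirely.
    """
    gains = []
    for k in range(n_speakers):
        spk = 2 * math.pi * k / n_speakers
        # wrapped angular distance in [-pi, pi]
        d = math.atan2(math.sin(source_angle - spk), math.cos(source_angle - spk))
        gains.append(math.cos(d / 2))  # nonnegative over the whole ring
    norm = math.sqrt(sum(g * g for g in gains))
    return [g / norm for g in gains]

# Source straight ahead of speaker 0: that speaker receives the peak gain.
gains = ring_gains(0.0)
```

Advancing `source_angle` each audio block along, say, a planet's orbital phase is what turns the motion pattern into a traveling sound hologram.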
Live Interactive Elements
Interactive elements will be integrated, allowing for the synchronization of sound and vocal effects with the visuals, generating emergent geometric patterns through hand gestures and vocal harmonics.
"Oscilla is an audio-visual installation that allows the audience to interact with a waveform with their own voice through a microphone, and experience both the acoustic and visual results. The audience is encouraged by the visual feedback from the waveform and the audio feedback from the ring-modulation filter to produce more interesting results with their voice. With more experimenting, the audience can deduce certain patterns hidden in the algorithm of the visual pattern and gain control over them." - https://seehearmove.com/artworks/
For Creality, I collaborate with Scott Gregory, who uses TouchDesigner to create interactive and audio-reactive video installations. For the LIB performance, voice-generated cymatics would be designed to propagate during specific moments of the video composition, such as when zooming into fractal aspects of nature. Harmonics of my voice will interact with an array of effects parameters that are also programmed to modulate the various parameters of the 3D geometry. As I vocalize, these parameters will modulate the harmonics of my voice in real time, while also altering the visual patterns displayed.
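To make the voice-to-geometry routing concrete, here is a toy sketch of the mapping layer: a detected fundamental and its harmonic amplitudes drive pattern controls. The parameter names, ranges, and mapping curves below are invented for illustration; the real routing lives inside Scott's TouchDesigner network.

```python
import math

def harmonics_to_geometry(f0, harmonic_amps):
    """Map a sung fundamental (Hz) and harmonic amplitudes to pattern controls.

    All mappings here are hypothetical examples: pitch sets the fold
    count of the cymatic figure, the fundamental's share of the energy
    sets its size, and spectral brightness adds twist.
    """
    symmetry = max(3, min(24, round(f0 / 55.0)))              # fold count from pitch
    total = sum(harmonic_amps) or 1.0
    radius = 0.2 + 0.8 * harmonic_amps[0] / total             # purer tone, larger figure
    twist = math.tanh(sum(i * a for i, a in enumerate(harmonic_amps, 1)) / 10.0)
    return {"symmetry": symmetry, "radius": radius, "twist": twist}

# An A3 (220 Hz) with decaying harmonics yields a 4-fold figure.
params = harmonics_to_geometry(220.0, [1.0, 0.5, 0.25])
```

In performance, an FFT of the live microphone feed would supply `f0` and `harmonic_amps` every frame, so the figure breathes with the voice.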
In this performance with Thermal Gestures by Marco Pinter, I oscillate between using my voice and body to paint electromagnetic visualizations in this installation while modulating the live vocals, creating an improvised synaesthetic tapestry of thermal visuals and psychoacoustic sound. For the LIB performance, the mode of synchronization and interaction between sound and visuals would be more automatic and simultaneous, but this is a good example of how the two modes can interplay for a synaesthetic experience.
Live Visual Propagation
Created in 2017, this demo is an artistic exploration with prolific artist Android Jones and Microdose VR. Rather than using tilt brushes to generate and propagate streams of images, our team can use cameras to track movement and detect specific gestures, creating streams of 3D, AI-generated objects and animations for interactive scenes during the live performance at LIB. For example, a scene near the end of the performance uses the hands as a metaphor for free will and agency in co-creating reality, and the visuals will be pre-programmed to propagate various aspects of a world and landscape, emulating the way our imagination folds into the materialization of the external world.
Below is a sample intended to give a general sense of blending animated visuals with physical movement. Note that the sample is from past online performances utilizing a green screen; the technology used for LIB would implement SDXL Turbo to motion-track and project my avatar onto the large screen for specific scenes where visuals blend with the body, creating a sense of first-person perspective for audiences at large scales such as the solar system, a black hole, a nebula, the Universe, etc. SDXL Turbo is a real-time text-to-image generation model based on Stable Diffusion, used here along with a custom model that allows AI-generated content to be embedded into image frames obtained from a webcam. This will be used to transform me into imagined forms generated by AI, mirroring my body positions and movements. More info on this tech here: https://stability.ai/news/stability-ai-sdxl-turbo
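SDXL Turbo itself requires a GPU and model weights, so as a minimal sketch we show only the final step of the pipeline described above: embedding a generated layer into each incoming camera frame. The segmentation mask and the generated imagery are stand-ins here; in the live system the mask would come from body tracking of the performer and the layer from SDXL Turbo.

```python
def composite(frame, generated, mask):
    """Blend an AI-generated layer into a camera frame, pixel by pixel.

    frame / generated: 2D luminance arrays of equal shape;
    mask: per-pixel alpha in [0, 1], e.g. a body-segmentation mask.
    Everything here is a stand-in for the real SDXL Turbo pipeline.
    """
    return [
        [f * (1 - m) + g * m for f, g, m in zip(fr, gr, mr)]
        for fr, gr, mr in zip(frame, generated, mask)
    ]

# Fully masked pixel takes the generated value; half-masked pixel blends.
blended = composite([[0.0, 100.0]], [[200.0, 200.0]], [[1.0, 0.5]])
```

Run per frame at camera rate, this is what lets the generated avatar track body positions while the background remains the cosmic scene.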
AI-generated Video Style Transfer
AI coder and collaborator Xander Steenbrugge of www.eden.art has recently developed a cutting-edge 'video style transfer' technique: an input video supplies a driving motion/shape, which is combined with a texture image (or set of images) carrying a specific aesthetic style, applying that texture to the driving video. Our team would combine this technique with Stable Diffusion to create novel ways of depicting time-lapsed nature and evolutionary biology. Example below:
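Xander's actual technique is diffusion-based; purely to illustrate the core idea of motion from one video steering another image, here is a toy sketch that re-samples a texture along a per-pixel motion field (real optical-flow estimation and the diffusion re-painting step are out of scope):

```python
def warp_texture(texture, flow):
    """Re-sample a texture image along a per-pixel motion field.

    texture: 2D array of pixel values; flow[y][x] is an integer (dy, dx)
    offset derived from the driving video. Out-of-range samples clamp
    to the image border. A stand-in for the diffusion-based pipeline.
    """
    h, w = len(texture), len(texture[0])
    return [
        [texture[min(h - 1, max(0, y + flow[y][x][0]))]
                [min(w - 1, max(0, x + flow[y][x][1]))]
         for x in range(w)]
        for y in range(h)
    ]

# A zero flow field leaves the texture unchanged.
identity = [[(0, 0), (0, 0)], [(0, 0), (0, 0)]]
same = warp_texture([[1, 2], [3, 4]], identity)
```

Chaining this frame-to-frame with flow extracted from time-lapsed nature footage is what lets an aesthetic style "ride" the motion of evolving organisms.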
Closing and Invitation for Collaboration
With a sense of anticipation and a shared vision for transformative art, I am thrilled at the opportunity to integrate our project into the LIB festival’s celebrated lineup. This collaboration represents a confluence of art, science, and technology, aimed at cultivating immersive experiences that resonate deeply with audiences. The LIB festival, known for its pioneering spirit and commitment to regenerative art, offers the ideal moment for this project's debut, harmonizing with our goal to enlighten and connect.
I welcome the opportunity to more deeply discuss how we can tailor or expand our project to complement the unique ethos and needs of the festival. I am available for a meeting or call to explore this collaborative potential further, ensuring our project not only aligns with the festival's vision but also enriches the LIB experience.
Feel free to reach out to me through Roxi Shohadaee at email@example.com
And/or contact me directly at firstname.lastname@example.org
I would be happy to discuss how we can collectively create engaging and meaningful art. Thank you for considering our project for the LIB festival. I look forward to the prospect of our paths converging in this creative endeavor.
You can explore the bios of our collaborators, with links to their websites, in the section below.
Amanda Gregory is an opera singer, immersive experience designer, multimedia performer, sound artist, composer, and researcher. Her career brings together art and science, exploring human potential and the nature of reality. With a background in both traditional and contemporary opera and knowledge of mathematical music, Gregory has built a practice of cross-disciplinary collaboration, leading to the creation of multisensory events.
She earned a Master of Music from the Manhattan School of Music and works as a research associate at UCSB's META Lab. She is also involved with UCSB’s Molecular Biology Lab and participates as a remote resident at the Santa Barbara Center for Arts, Science, and Technology (SBCAST).
During her live performances, Gregory uses psychoacoustic effects such as binaural frequencies and EMDR. Her sound design, which intersects neurobiology and psychophysics, includes soundscapes inspired by nature and explores biomimicry, biorhythms, cymatic frequencies, synesthesia, and quantum potentialities. Gregory's projects often combine artistic data sonification with theoretical perspectives. Her work has been presented at various venues and events, such as the Contemporary Arts Museum (Houston), Lincoln Center, DWeb Camp, Google Launchpad, the Global Energy Conference, Sivananda Ashram Yoga Retreat, NeuroLeadership Summit, Oculus Rift VR, Obscura Digital, Diverse Intelligences Summer Institute, Design Science Studio, and the Google IO conference, where she premiered “Atlas of Emotions”, an immersive media project based on an online tool that was originally commissioned by the Dalai Lama and created by Drs. Paul & Eve Ekman to foster emotional awareness. She has also performed at SXSW, MAPS, Lightning in a Bottle, Adobe’s Festival of the Impossible, the Awakened Futures Summit, the Science of Consciousness Conference, the Unified Planet on Earth Day for 15,000+ listeners, and in venues throughout Ibiza, Spain. She recently performed the opening ceremony at the Buckminster Fuller Institute’s 40th anniversary event, the closing ceremony at the Neurotherapy Conference in Santa Barbara, and the Synesthesia festival in Portugal.
Gregory is currently developing immersive audio-visual experiences to foster “open mindfulness” in collaboration with UCSB’s META Lab, the Center for Human Potential, and SBCAST; this project aims to expand beyond traditional boundaries and open new dimensions of awareness. Her work and its impact in arts, science, and technology have been detailed in a feature by the Santa Barbara Independent.
Jonathan Schooler, PhD, is a Distinguished Professor of Psychological and Brain Sciences at the University of California Santa Barbara, Director of UCSB’s Center for Mindfulness and Human Potential, and Acting Director of the Sage Center for the Study of the Mind. He received his Ph.D. from the University of Washington in 1987 and then joined the psychology faculty of the University of Pittsburgh. He moved to the University of British Columbia in 2004 as a Tier 1 Canada Research Chair in Social Cognitive Science and joined the faculty at UCSB in 2007. His research intersects philosophy and psychology, including the relationship between mindfulness and mind-wandering, theories of consciousness, the nature of creativity, and the impact of art on the mind. Jonathan is a fellow of several psychology societies and the recipient of numerous grants from the US and Canadian governments and private foundations. His research has been featured on television shows including BBC Horizon and Through the Wormhole with Morgan Freeman, as well as in print media including the New York Times, the New Yorker, and Nature Magazine. With over 250 publications and more than 40,000 citations, he is a five-time recipient of the Clarivate Analytics Web of Science™ Highly Cited Researcher Award and is ranked by Academicinfluence.com among the 100 most influential cognitive psychologists in history.
Roxi is the Ecosystem Director, Co-Founder, and ARTchitect of the Design Science Studio, an educational cultural incubator for artists and designers founded to build the capacity of the creative community to propel the design science (r)Evolution and the Regenaissance. Artists in the Design Science Studio devote their creativity to facilitate equitable, just, regenerative futures through personal, cultural, and planetary change. Roxi is also the Founder + CEO of habRitual: an experiential production, interdisciplinary design, and immersive art studio creating for 100% of life. Learn more about her current and past initiatives here.
Roxi is a regenerative artivist, protopian futurist, ontological designer, experiential producer, transdisciplinary social sculptor, and creative doula. She is a student of living systems, regenerative design, herbalism, and decolonial sustainability. She has over 17 years of experience working at the intersection of art, science, experience, and technology. Her quest is to harness this intersectional approach to catalyze social and systemic change through inclusive, transdisciplinary collaborations for the regeneration of our planet and culture. Some notable projects include being the supervising producer for the Planet Home Village, End of You at Gray Area, co-producing LMNL at Onedome, co-founding Wild Vessel and Reimagine: End of Life, creative producing Interactive Art for Lightning in a Bottle and Symbiosis Music Festivals, and co-producing the Burning Man Global Leadership Conference, Desert Arts Preview, and Artists' Symposium. She has taught at UC Irvine, California College of the Arts, Gray Area, RegenIntel, the Design Science Studio, and more.
Her path is grounded in a commitment to creating inspiring and embodied ways of learning together. As behavior is a function of culture - she supports and develops creative cultural interventions and co-learning evolutionary containers. Her thesis is rooted in a belief that as we perpetuate equitable and just cultures, societal behaviors participating in life affirming ways of being will emerge in that process. As we are facing a crisis of imagination, Roxi’s efforts as an immersive artist, regenerative designer and educator encourage collective imagination beyond the limits of the plausible and probable into the possible in service to the potential of a world where all life thrives at the expense of none. She is a deep believer and practitioner in designing for states of being, having those states of being prime us for connection, reparations and regeneration with ourselves, each other and our beautiful living planet.
Elan Rosenman is a 3D audio engineer and experiential designer passionate about the power of sound as a tool for affecting human consciousness. Elan’s work explores the cross-pollination of ambisonic audio technology with sound healing modalities, biofeedback and the neuroscience of meditation. His experiential installations and programs for peak performance and stress management can be found used by Fortune 500 companies in the SF Bay Area and abroad.
Following a decade in the music industry and radio production, Elan completed his studies in psychoacoustics and sound therapy at the Institute of Sound and Consciousness in San Francisco in 2008. After further coursework in digital audio production and mentorship by Dolby engineers, Elan went on to co-found Envelop, a 3D-audio production platform, studio and performance space in San Francisco. Elan's current venture, AudioElixir develops mobile, modular sound meditation rooms curated with immersive media and bio-responsive programs for wellness and transpersonal development.
Elan has also held key roles in immersive audio, making valuable contributions to interactive immersive museums such as Onedome in SF and internationally touring fulldome shows such as Mesmerica and Beautifica.
Tim Mullen (BA, MS, PhD) is a neuroscientist, technologist, entrepreneur, and new media artist. He is a founder of multiple technology companies, including Intheon, which has pioneered a platform for AI-driven brain data analytics and neural interfacing. His scientific research over the last two decades on brain-computer interfaces (BCI), conducted at UC Berkeley, UCSD, and Xerox PARC, and in collaboration with NASA, NSF, DARPA, ARL, and other organizations, sits at the intersection of AI, neuroscience, and human-computer interaction. He has published over 50 scientific papers and book chapters, and his work has been featured on TED, BBC, Wired, Scientific American, National Geographic, and other media outlets. Tim also finds creative expression as a new media artist and musician. His interactive installations and performances, exhibited over the past fifteen years in North America and Europe, leverage emerging technologies to blur boundaries between mental and physical worlds. His work has explored themes of interpersonal resonance, emotional communication, ecopsychology and nature connectedness, causality and emergence in complex systems, and the concept of “audience as performer.” Tim is a Board Member of San Diego's leading classical arts organization Mainly Mozart and founding Creative Director of its annual “Mozart & the Mind” festival exploring the impact of music on our brains, health, lives, and communities.
Cassandra Vieten, PhD, is a professor, licensed clinical psychologist, mind-body medicine researcher, author, consultant, and internationally recognized workshop leader and public speaker. Her current research projects focus on establishing training guidelines for spiritual and religious competencies for mental health professionals; developing and delivering wellness programs for law enforcement agencies, officers, and professional staff; developing virtual reality tools and experiences designed to induce perspective shifts that change people's worldviews; investigating the nature and potentials of imagination; and studying the therapeutic potential of psychedelics.
Cassi is a Clinical Professor in the Department of Family Medicine's Centers for Integrative Health at the University of California, San Diego, where she serves as the Director of the Center for Mindfulness. The CFM is one of the leading mindfulness centers in the country, offering courses in mindfulness to the general public, conducting research on mindfulness-based interventions (MBIs), incubating new MBIs for special populations and settings, and training and certifying professional mindfulness teachers.
She is also Director of Research at the Arthur C. Clarke Center for Human Imagination at UC San Diego. The Clarke Center advances understanding of the phenomenon of imagination and its practical applications, researching, enhancing, and enacting the gift of human imagination by bringing together the inventive power of science and technology with the critical analysis of the humanities and the expressive insight of the arts. The Center also works to develop more effective ways of using imagination to cultivate public engagement with the big questions of our time, to improve education and learning, and to enhance the application of imagination in meeting humanity’s challenges.
Cassi is also co-founder and Clinical Psychology Director at the Psychedelics and Health Research Initiative at UCSD, where a flagship study focuses on psilocybin for phantom limb pain in patients with amputations.
She is Senior Advisor at the John W. Brick Mental Health Foundation, where she served as Executive Director from 2019 to 2023. Founded by Victor and Lynne Brick in honor of Victor’s brother John, who suffered from schizophrenia, the JWB Foundation funds and promotes empirical research on fitness, nutrition, and mind-body approaches to foster mental health and to better prevent and treat mental illness.
Cassi is a Senior Fellow at the Institute of Noetic Sciences (IONS), founded by Apollo 14 Astronaut Edgar Mitchell, where she worked for 18 years. She served as CEO/President from 2013-2016 and President from 2016-2019. The mission of IONS is revealing the interconnected nature of reality through scientific exploration and personal discovery, creating a more just and thriving world. In addition to her contributions to the overall mission, vision, strategic direction, financial health, board and staff development, and activities of the organization, she headed up several initiatives including Mindful Motherhood, Living Deeply and the Transformation Project, and the Future of Meditation Research Project.
She is co-chair of the Board of Directors of Partners for Youth Empowerment, Vice-Chair of the Board of Directors of the Consciousness and Healing Initiative, and serves on the Board of the Virtual World Society.
Xander Steenbrugge is an AI researcher, digital artist, public speaker, online educator, and founder of the http://wzrd.ai digital media platform. A multifaceted professional, he combines his background as a civil engineer with his passion for Machine Learning (ML) to make significant contributions to the field. His journey in ML began during his master's thesis, which focused on brain-computer interfaces, specifically on classifying brainwaves (EEG). This initial exploration ignited a fervent interest in ML, leading him to become a prominent ML consultant, public speaker, and content creator on his YouTube channel 'Arxiv Insights,' where he simplifies complex ML concepts for a wide audience.
Throughout his career as an ML consultant, Xander has successfully executed projects across various domains, including computer vision (with tasks such as object tracking, optical character recognition, and image classification) and natural language processing (encompassing chatbots, text classification, and more). His approach often involves leveraging open-source tools like TensorFlow and PyTorch, alongside compute resources available on the Google Cloud platform, to innovate and solve real-world problems.
In recent years, Xander has shifted his focus towards bridging the gap between academic research and practical applications. He earned a PhD in Deep Reinforcement Learning, with a research focus on applying novel algorithms to industrial robotics and process optimization. This pursuit underscores his commitment to not only understanding but also advancing the cutting edge of ML research and its applications in the industry.
In his current role as the head of applied ML research at ML6, Xander leads efforts to apply ML in transformative ways across various sectors. His work exemplifies the dynamic interplay between theoretical knowledge and practical application, aiming to drive innovation and efficiency through ML.
Additionally, Xander is one of the founders and creative minds behind eden.art, a platform that showcases his interests in the intersection of ML and art. This venture further highlights his versatile talents and his ability to merge technical expertise with creative expression, making ML accessible and engaging to a broader audience. Through eden.art, Xander continues to explore the boundaries of ML, pushing the limits of what's possible in the realm of digital art and beyond.
Alan Macy is currently the Research and Development Director, past President and a founder of BIOPAC Systems, Inc. He designs data collection and analysis systems, used by researchers in the life sciences, that help identify meaningful interpretations from signals produced by life processes. Trained in electrical engineering and physiology, with over 30 years of product development experience, he is currently focusing on psychophysiology, emotional and motivational state measurements, magnetic resonance imaging and augmented/virtual reality implementations. He presents in the areas of human-computer interfaces, electrophysiology, and telecommunications. His recent research and artistic efforts explore ideas of human nervous system extension and the associated impacts upon perception. As an applied science artist, he specializes in the creation of cybernated art, interactive sculpture and environments.
Alexis Story Crawshaw, Ph.D., Ph.D. is a transdisciplinary composer, new media artist, vocalist, researcher, technologist, cognitive scientist, entrepreneur, and educator. She is a leading expert in somatic sound (and related spatial computing), infrasonic music, and multisensory XR. She has realized a variety of music-themed art installations and compositional projects across the US and France, particularly in the area of interactive somatic sound and XR, including several collaborations with the restaurant Barbareño as well as site-specific installations for the Fridman Gallery in New York; the Santa Barbara Center for Art, Science, and Technology; and the Maison des Sciences de l’Homme Paris Nord. She holds two doctorates: one from UCSB in Media Arts and Technology, and the other from Université Paris 8 in Music within “Aesthetics, Sciences and Technologies of Arts.” Her second dissertation introduces and outlines the subfield of somatic computer music. Recently, she helped co-launch the ASU California Center’s Haptics for Inclusion Lab, and she currently consults with the haptic chair company ShiftWave. She has lectured for digital arts courses at Cal Poly, San Luis Obispo and UCSB, and she has helped develop two original transdisciplinary pedagogical models at the university level. Currently, she is writing a new compositional handbook from transdisciplinary perspectives.
Tristen is a software artist exploring the frontiers of generative art and machine intelligence, with a diverse background in nano-social network design, orchestrating AI hackathons, and research in LLM Security.
In 2016 he founded the Machine Learning Society and grew its membership to over 20,000 computational scientists worldwide. Tristen has hosted dozens of conferences, hackathons, and panel discussions at the intersections of artificial intelligence, biotechnology, blockchain, smart cities, behavioral psychology, and ethics. In 2018 he launched the CO Network, a scientific network engineered to accelerate global innovations in science, technology, and culture.
Tristen has worked closely with ambitious governments, startups, academic institutions, Fortune 500 companies, and thousands of scientists to tackle the world's biggest social and technical challenges, and recently developed the project Computational Renaissance.
Currently, Tristen is dedicated to harnessing the power of semantic technologies and optimizing swarm intelligence, aiming to redefine the boundaries of digital expression and interaction. His work, a fusion of technology and artistry, invites audiences into an immersive exploration of the symbiosis between music, light, and machine intellect.