Important: Version 2.0 Update
Everything Reactivity: Almost all nodes in this pack can be made to react to audio, MIDI, motion, time, color, depth, brightness, and more, allowing for incredibly dynamic and responsive workflows. If a node is prefixed with FLEX, this reactivity is central to its functionality.
- Flex Features: Dynamic control over IPAdapters, Masks, Images, Videos, Audio, and more
- Audio & MIDI Processing: Separate instruments and create audio-reactive visuals
- Particle Systems: Create mesmerizing, fluid-like effects
- Optical Flow: Generate masks based on motion in videos
- Depthflow: Use flex features to control Depthflow nodes, adding parallax animations to your workflows!
- AdvancedLivePortrait: Use flex features to control facial animation expressions!
- Advanced Controlnet: Direct integration with ComfyUI-AdvancedControlnet!
- AnimateDiff: Direct integration with ComfyUI-AnimateDiff-Evolved!
This repository has been updated to Version 2.0! After careful consideration, I decided that a complete update was better than maintaining legacy support indefinitely. This new version brings significant improvements while maintaining all existing functionality. This update was done with user experience, extensibility, and functionality in mind.
- License: This project is now licensed under the MIT License.
- EVERYTHING reacts to EVERYTHING: Now you can modulate ALL parameters of ALL Flex nodes! Possibilities increased by multiple orders of magnitude.
- Optional Feature Inputs: Feature inputs are now optional! This means these nodes double as a powerful suite for image, mask, and video manipulation even without reactivity!
- More Intuitive: Redesigned with user experience in mind. Less noodles, more intuitive connections.
- Help: Takes full advantage of ComfyUI's tooltip system.
- Manual Feature Creation: New interface for drawing/creating features manually - far more powerful than it might seem!
- Text as Features: New integration with OpenAI Whisper allows text to be used as a feature source, with a fully modular trigger system.
- Enhanced External Integration: Deeper compatibility with external node packs.
- Image Improvements: Major improvements to FlexImage nodes. One might say they are more than useful now.
- Mask Improvements: Major improvements to FlexMask nodes.
- Performance Improvements: Major performance improvements in many nodes. More to follow.
- Feature Modulation: More robust and feature-rich modulation system.
- And much more!
Due to ComfyUI's workflow loading mechanism, existing workflows that use these nodes may well break after updating. I considered this carefully, as I have yet to introduce breaking changes to this node system, but this extensive update necessitated a complete overhaul. There will not be a version 3; rather, version 2 will be updated as needed. I have taken the time to update the most relevant example workflows to version 2.
If you need to run an older workflow, you can revert to the previous version of these nodes by using the Manager, or by running this command in your ComfyUI_RyanOnTheInside directory:
`git checkout dab96492ac7d906368ac9c7a17cb0dbd670923d9`
To return to the latest version later, use:
`git checkout main`
Examples showcasing various effects using particle emitters, vortices, and other node features
Getting started with the RyanOnTheInside node pack is easy:
- Install the node pack as described in the Installation section.
- Open ComfyUI and look for nodes prefixed with "RyanOnTheInside" in the node browser.
- Check out the example workflows on Civitai and tutorials on YouTube to see how different features can be used.
There are many example workflows in this repo, but for the most current ones, along with all required assets, visit my Civitai profile: RyanOnTheInside Civitai Profile
For tutorials on these nodes and more, check out my YouTube channel. Production value low, information dense af: RyanOnTheInside YouTube Channel
For detailed information on each and every node, click the ? icon in the top-right corner of the node.
Particles are now reactive!
Depthflow compatible! Live Portrait Compatible!!
Dynamic control over various aspects of your workflow:
- Flex Features:
  - Modulate IPAdapters, Masks, Images, and Particles based on extracted features
  - Features include: Audio, MIDI, Motion, Proximity, Depth, Color, Time, and more
  - Create adaptive, responsive effects that evolve with your input data
- Particle Systems:
  - Multiple particle emitters with customizable settings
  - Force fields (Gravity Wells and Vortices) for complex interactions
  - Boundary-respecting particles and static body interactions
  - Time-based particle modulation (size, speed, color)
- Audio & MIDI Processing:
  - Separate audio into individual instrument tracks
  - Extract features from audio and MIDI for visual effects
  - Create audio-reactive animations and transformations
- Optical Flow:
  - Generate masks based on movement in video sequences
  - Multiple optical flow algorithms available
  - Create motion-reactive particle simulations
- You can do all of this 7000x with FlexMask nodes.
I'm thrilled to announce that external node packs are now compatible with the feature system! Here are some notable examples:
The Depthflow Nodes pack brings the power of parallax animations to ComfyUI, allowing you to turn 2D images into stunning 2.5D animations. What's even more exciting is that it's fully compatible with my feature system!
Key features of Depthflow Nodes:
- Create complex parallax animations from images and depth maps
- Various motion presets for quick setup
- Fine-grained control with individual motion components
By combining Depthflow Nodes with my feature system, you can create dynamic, responsive parallax animations that react to audio, MIDI, motion, and more. This collaboration opens up a world of creative possibilities for your ComfyUI workflows!
Check out the Depthflow Nodes repository for more information and installation instructions.
The AdvancedLivePortrait nodes bring powerful facial animation capabilities to ComfyUI, and now they're fully compatible with our feature system! This means you can create dynamic, responsive facial animations that react to audio, MIDI, motion, and more.
Key features when combined with our system:
- Control facial expressions using audio features
- Sync lip movements with speech or music
- Create dynamic emotional responses based on various inputs
- Modulate animation parameters in real-time
The AnimateDiff Evolved nodes bring powerful animation capabilities to ComfyUI. There is now direct integration with this node pack, and this integration will grow over time!
The Advanced Controlnet nodes bring powerful, granular control to ComfyUI. There is now direct integration with this node pack, and this integration will grow over time!
The Flex Features system allows for dynamic control over various aspects of your workflow by extracting and utilizing different types of features:
- Audio Features:
  - Amplitude Envelope: Tracks the overall volume changes in the audio
  - RMS Energy: Measures the average energy of the audio signal
  - Spectral Centroid: Indicates the "center of mass" of the spectrum
  - Onset Detection: Detects the beginning of musical notes or events
  - Chroma Features: Represents the tonal content of the audio
- MIDI Features:
  - Velocity: Intensity of MIDI note presses
  - Pitch: Musical note values
  - Note On/Off: Timing of note starts and ends
  - Duration: Length of individual notes
  - Density: Number of active notes over time
  - Pitchbend: Pitch modulation data
  - Aftertouch: Pressure applied after the initial note press
  - Various CC (Control Change) data: Modulation, expression, sustain, etc.
- Motion Features:
  - Mean Motion: Average movement across the frame
  - Max Motion: Largest movement detected
  - Motion Direction: Overall direction of movement
  - Horizontal/Vertical Motion: Movement along specific axes
  - Motion Complexity: Variation in movement across the frame
- Depth Features:
  - Mean Depth: Average depth in the scene
  - Depth Variance: Variation in depth values
  - Depth Range: Difference between the nearest and farthest points
  - Gradient Magnitude: Rate of depth change
  - Foreground/Midground/Background Ratios: Proportion of the scene at different depths
- Color Features:
  - Dominant Color: Most prevalent color in the image
  - Color Variance: Spread of colors used
  - Saturation: Intensity of colors
  - RGB Ratios: Proportion of red, green, and blue in the image
- Brightness Features:
  - Mean Brightness: Overall lightness of the image
  - Brightness Variance: Spread of light and dark areas
  - Brightness Histogram: Distribution of brightness levels
  - Dark/Mid/Bright Ratios: Proportion of the image at different brightness levels
- Time Features (see the sketch after this list):
  - Smooth: Linear progression over time
  - Accelerate: Increasing rate of change
  - Pulse: Periodic oscillation
  - Sawtooth: Rapid rise followed by sudden drop
  - Bounce: Emulates a bouncing motion
- Whisper (Text) Features:
  - Speech-to-Text: Convert spoken words from audio into text features
  - Transcription Timing: Sync features with specific words or phrases
  - Confidence Scores: Use speech recognition confidence as a feature
  - Language Detection: Create features based on detected languages
  - Speaker Segments: Generate features from different speaker segments
  - Sentiment Analysis: Extract emotional content from spoken words
  - Temporal Alignment: Map text features to specific timestamps
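For intuition, the time-based patterns above are just simple functions of normalized time. Here is a minimal sketch (illustrative formulas only, not the pack's internal code) with `t` running from 0 to 1 over the length of the animation:

```python
import numpy as np

def time_feature(t, kind="smooth", frequency=4.0):
    """Illustrative time curves on normalized t in [0, 1]; outputs stay in [0, 1]."""
    if kind == "smooth":       # linear progression over time
        return t
    if kind == "accelerate":   # increasing rate of change
        return t ** 2
    if kind == "pulse":        # periodic oscillation
        return 0.5 * (1 + np.sin(2 * np.pi * frequency * t))
    if kind == "sawtooth":     # rapid rise followed by sudden drop
        return (t * frequency) % 1.0
    if kind == "bounce":       # decaying bounce-like motion
        return np.abs(np.sin(np.pi * frequency * t)) * (1 - t)
    raise ValueError(f"unknown kind: {kind}")

t = np.linspace(0, 1, 120)          # e.g. 120 frames
values = time_feature(t, "pulse")   # one value per frame, ready to drive a parameter
```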
These features can be used to control almost anything: IPAdapters, masks, images, video... even particle emitters (see below :D), creating dynamic and responsive effects that adapt to the input data.
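To make the "extract a feature, drive a parameter" idea concrete, here is a minimal sketch in plain Python using librosa. The function name `rms_feature` and the IPAdapter-weight mapping are hypothetical illustrations, not the pack's actual node interface:

```python
import librosa
import numpy as np

def rms_feature(audio_path, fps=30):
    """Per-frame RMS energy, normalized to [0, 1] and resampled to the video frame rate."""
    y, sr = librosa.load(audio_path, sr=None)
    rms = librosa.feature.rms(y=y)[0]                          # one value per analysis hop
    rms = (rms - rms.min()) / (rms.max() - rms.min() + 1e-8)   # normalize to [0, 1]
    n_frames = int(librosa.get_duration(y=y, sr=sr) * fps)
    # Resample so there is exactly one feature value per video frame
    return np.interp(np.linspace(0, 1, n_frames), np.linspace(0, 1, len(rms)), rms)

feature = rms_feature("song.wav", fps=30)
# Drive any parameter from the feature, e.g. an IPAdapter weight between 0.2 and 1.0
ipadapter_weight = 0.2 + 0.8 * feature
```

The same pattern (feature curve in, modulated parameter out) is what the Flex nodes handle for you inside the graph.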
Create mesmerizing, fluid-like effects through advanced particle simulation:
- Multiple Emitters: Create complex particle flows with independent settings
  - Customize spread, speed, size, color, and more for each emitter
- Force Fields: Add depth to your simulations
  - Gravity Wells: Attract or repel particles
  - Vortices: Create swirling, tornado-like effects
- Global Settings: Fine-tune the overall simulation
  - Adjust gravity and wind for the entire particle space
- Boundary Interactions: Particles respect mask shapes and edges
- Static Bodies: Add obstacles and surfaces for particles to interact with
- Spring Joints: Create interconnected particle systems
- Time-based Modulation: Evolve particle properties over time
  - Adjust size, speed, and color dynamically
These features allow for the creation of complex, dynamic particle effects that can be used to generate masks, animate elements, or create stunning visual effects.
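Conceptually, a particle system like this reduces to a per-frame update loop. The NumPy sketch below is purely illustrative (names and constants are assumptions, not the pack's internals): an emitter spawns particles, a gravity well pulls on them, and a time-based rule shrinks them over the animation:

```python
import numpy as np

rng = np.random.default_rng(0)
num_particles, num_frames = 200, 120
well = np.array([0.5, 0.5])          # a gravity well at the center of a unit canvas

# Emitter: spawn particles at the left edge with a randomized rightward spread
pos = np.column_stack([np.zeros(num_particles),
                       rng.uniform(0.3, 0.7, num_particles)])
vel = np.column_stack([rng.uniform(0.005, 0.02, num_particles),
                       rng.uniform(-0.01, 0.01, num_particles)])

frames = []
for frame in range(num_frames):
    to_well = well - pos
    dist = np.linalg.norm(to_well, axis=1, keepdims=True) + 1e-6
    vel += 0.0005 * to_well / dist**2           # gravity well: pull particles inward
    pos = np.clip(pos + vel, 0.0, 1.0)          # boundary interaction: stay on the canvas
    size = 4.0 * (1.0 - frame / num_frames)     # time-based modulation: shrink over time
    frames.append((pos.copy(), size))           # one (positions, size) snapshot per frame
```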
Transform your visuals with the power of sound and musical data:
- Audio Processing:
  - Track Separation: Isolate vocals, drums, bass, and other instruments
  - Feature Extraction: Analyze audio for amplitude, frequency, and tonal content
  - Frequency Filtering: Target specific frequency ranges for processing
  - Visualizations: Create complex audio-reactive visual effects
- MIDI Processing:
  - Feature Extraction: Utilize note velocity, pitch, timing, and control data
  - Real-time Input: Process live MIDI data for interactive visuals
  - Sequencing: Create rhythmic visual patterns based on MIDI sequences
  - Control Mapping: Use MIDI controllers to adjust visual parameters
These audio and MIDI processing capabilities enable the creation of music-driven animations, visualizations, and effects that respond dynamically to sound input.
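As an illustration of the frequency-filtering idea, a band-pass filter can isolate a range (say, the bass) before extracting a feature from it. This is a generic librosa/SciPy sketch with assumed cutoff values, not the pack's implementation:

```python
import librosa
from scipy.signal import butter, sosfiltfilt

def bandpass(y, sr, low_hz, high_hz, order=4):
    """Keep only the low_hz..high_hz band of the signal."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfiltfilt(sos, y)

y, sr = librosa.load("song.wav", sr=None)
bass = bandpass(y, sr, 40, 150)                        # isolate roughly the bass range
onsets = librosa.onset.onset_strength(y=bass, sr=sr)   # then extract a feature from that band
onsets = onsets / (onsets.max() + 1e-8)                # normalize to [0, 1]
```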
Harness the power of motion to create stunning visual effects:
- Multiple Algorithms: Choose from various optical flow calculation methods
  - Farneback: Dense optical flow estimation
  - Lucas-Kanade: Sparse feature tracking
  - Pyramidal Lucas-Kanade: Multi-scale feature tracking for larger motions
- Motion-based Masking: Generate masks that highlight areas of movement
- Flow Visualization: Create visual representations of motion in video
- Particle Interaction: Use optical flow data to influence particle systems
- Directional Effects: Apply effects based on the direction of detected motion
Optical flow analysis allows for the creation of dynamic, motion-responsive effects that can be used for masking, animation, or as input for other visual processes.
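To illustrate motion-based masking, here is a minimal OpenCV sketch that runs the Farneback algorithm on two frames and turns per-pixel motion magnitude into a mask (a generic example with assumed parameters, not the pack's node code):

```python
import cv2
import numpy as np

def motion_mask(prev_frame, next_frame, threshold=0.5):
    """Dense Farneback optical flow -> normalized motion-magnitude mask."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    magnitude = magnitude / (magnitude.max() + 1e-8)   # normalize to [0, 1]
    return (magnitude > threshold).astype(np.float32)  # binary mask of moving areas

cap = cv2.VideoCapture("input.mp4")
ok, prev_frame = cap.read()
ok, next_frame = cap.read()
mask = motion_mask(prev_frame, next_frame)
```

The same magnitude map can equally be kept continuous and used to drive particle emitters or other Flex parameters rather than thresholded into a binary mask.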
Install via the ComfyUI Manager by searching for RyanOnTheInside, or manually by...
- Navigate to your ComfyUI's `custom_nodes` directory
- Clone the repository:
  `git clone https://github.com/ryanontheinside/ComfyUI_RyanOnTheInside.git`
- Navigate to the cloned directory:
  `cd ComfyUI_RyanOnTheInside`
- Install the required dependencies:
  `pip install -r requirements.txt`
- Restart ComfyUI if it's currently running and refresh your browser

See `requirements.txt` for a list of dependencies.
*Credit to https://github.com/alanhuang67/ComfyUI-FAI-Node for Voronoi implementation
Contributions are welcome! Both to the code and EXAMPLE WORKFLOWS!!! If you'd like to contribute:
- Fork the repository
- Create a new branch for your feature or bug fix
- Make your changes and commit them with descriptive commit messages
- Push your changes to your fork
- Submit a pull request to the main repository
This project is licensed under the MIT License - see the LICENSE file for details.
For issues, questions, or suggestions, please open an issue on the GitHub repository.