I spent some time with Gemini coding up a "lush" reverb in Reaper's JSFX scripting language. It's not terrible, but Valhalla has nothing to worry about.

It took a lot of iterations to get something usable, but maybe I'm using the wrong AI.


desc: Householder Cloud with Ducking
// Householder matrix with a more sensitive sidechain envelope.

slider1:-17<-60, 0, 1>Wet Level (dB)
slider2:-12<-60, 0, 1>Dry Level (dB)
slider3:0.85<0.1, 0.97, 0.001>Decay (Lushness)
slider4:0.5<0.1, 0.99, 0.01>Damping
slider5:0.3<0, 1, 0.01>Modulation
slider6:50<0, 100, 1>Ducking Amount (%)
slider7:150<10, 500, 1>Ducking Recovery (ms)

@init
B_SZ = 131072; B_MSK = B_SZ - 1;
freemem = 0;
d1=freemem; d2=freemem+B_SZ; d3=freemem+B_SZ*2; d4=freemem+B_SZ*3;
freemem += B_SZ*4;
ap1=freemem; ap1l=1453; ap2=freemem+1453; ap2l=1997;
freemem += 4000;

l1=3433; l2=4547; l3=5641; l4=6823;
env = 0;

@slider
wet = 10^(slider1/20); dry = 10^(slider2/20);
g = min(slider3, 0.96); // cap feedback below 1.0 so the tank can't run away
damp = slider4;
m_inc = slider5 * 0.0002;
// Ducking sensitivity: Higher % = deeper cut
duck_amt = slider6 / 100;
rel_coeff = exp(-1/(srate * (slider7/1000)));

@sample
in = (spl0 + spl1) * 0.5;

// 1. IMPROVED DUCKING ENVELOPE
// Use a faster attack and a more responsive curve
abs_in = abs(in);
env = abs_in > env ? abs_in : env * rel_coeff; // instant attack, exponential release
// Apply a scaling factor to 'env' so ducking is audible even at -18dB
duck_sense = min(1, env * 2.5);
duck_gain = 1.0 - (duck_sense * duck_amt);

// DC Block
in_dc = in - prev_in + 0.999 * in_dc; prev_in = in;

m_ph += m_inc;
m_ph > 2*$pi ? m_ph -= 2*$pi; // keep the phase bounded over long sessions
mod = (sin(m_ph) + 1) * 8;    // 0..16 sample delay-time wobble

function read_itp(buf, ptr) local(i f) (
  i = floor(ptr); f = ptr - i;
  // offset by B_SZ before masking so early negative read positions wrap safely
  buf[(i + B_SZ) & B_MSK] * (1-f) + buf[(i + B_SZ + 1) & B_MSK] * f;
);

n1 = read_itp(d1, p1 - l1 - mod);
n2 = read_itp(d2, p2 - l2 + mod);
n3 = read_itp(d3, p3 - l3);
n4 = read_itp(d4, p4 - l4);

// 2. Householder Scattering: t = n - (2/N)*sum(n), with N = 4 delay lines
sum_n = (n1 + n2 + n3 + n4) * 0.5;
t1 = n1 - sum_n; t2 = n2 - sum_n;
t3 = n3 - sum_n; t4 = n4 - sum_n;

// Allpass diffusers to smear the echoes into a smoother wash
function diffuse(sig, buf, p) local(out) (
  out = buf[p] - sig;
  buf[p] = sig + out * 0.6; // 0.6 = diffusion coefficient
  out;
);
f1 = diffuse(t1, ap1, pap1 = (pap1+1) % ap1l);
f2 = diffuse(t2, ap2, pap2 = (pap2+1) % ap2l);

// 3. Damping (one-pole lowpass) and soft limiting (atan soft clip)
lp1 = atan(f1 * (1-damp) + lp1 * damp);
lp2 = atan(f2 * (1-damp) + lp2 * damp);
lp3 = atan(t3 * (1-damp) + lp3 * damp);
lp4 = atan(t4 * (1-damp) + lp4 * damp);

d1[p1] = in_dc + lp1 * g;
d2[p2] = in_dc + lp2 * g;
d3[p3] = in_dc + lp3 * g;
d4[p4] = in_dc + lp4 * g;

p1=(p1+1)&B_MSK; p2=(p2+1)&B_MSK; p3=(p3+1)&B_MSK; p4=(p4+1)&B_MSK;

// 4. APPLY DUCKING TO WET SIGNAL
// Multiplying the final wet sum by duck_gain
out_L = (lp1 + lp3) * wet * duck_gain;
out_R = (lp2 + lp4) * wet * duck_gain;

spl0 = (spl0 * dry) + out_L;
spl1 = (spl1 * dry) + out_R;
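For anyone wondering what the "Householder Scattering" step is actually doing: the four t values are one Householder reflection, H = I - (2/N)*ones, applied to the delay-line outputs. H is orthogonal, so the feedback matrix neither adds nor removes energy on its own, which is what lets the decay slider alone shape the tail. A quick NumPy sketch (my variable names, just checking the math):

```python
import numpy as np

N = 4
H = np.eye(N) - (2.0 / N) * np.ones((N, N))  # Householder feedback matrix

# Orthogonality: H @ H.T == I, so the scattering preserves energy
print(np.allclose(H @ H.T, np.eye(N)))  # → True

# Matches the cheap form in the script: t_i = n_i - sum(n) * 0.5
n = np.array([0.3, -0.7, 0.2, 0.5])
print(np.allclose(H @ n, n - n.sum() * 0.5))  # → True
```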

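The one-liner DC blocker in the script is a standard one-pole highpass, y[n] = x[n] - x[n-1] + 0.999*y[n-1]. A small Python sketch (illustrative, not part of the plugin) showing it removes a constant offset:

```python
def dc_block(x, r=0.999):
    """One-pole DC blocker: y[n] = x[n] - x[n-1] + r*y[n-1]."""
    y, prev_x, prev_y = [], 0.0, 0.0
    for s in x:
        prev_y = s - prev_x + r * prev_y
        prev_x = s
        y.append(prev_y)
    return y

out = dc_block([0.5] * 10000)  # pure DC input
print(abs(out[-1]) < 1e-3)     # → True: the offset has decayed away
```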

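The ducking envelope is an instant-attack, exponential-release follower; slider7 sets the release time through rel_coeff = exp(-1/(srate*t)), so the envelope falls to 1/e of its peak after one release time. A Python sketch of the same follower (the function name is mine, not from the script):

```python
import math

def envelope_follower(samples, srate=48000, release_ms=150.0):
    """Instant-attack / exponential-release follower, as in the @sample code."""
    rel_coeff = math.exp(-1.0 / (srate * release_ms / 1000.0))
    env, out = 0.0, []
    for s in samples:
        a = abs(s)
        env = a if a > env else env * rel_coeff  # jump up, decay down
        out.append(env)
    return out

# A full-scale click decays to 1/e after one release time (150 ms = 7200 samples)
env = envelope_follower([1.0] + [0.0] * 7200, srate=48000, release_ms=150.0)
print(round(env[7200], 3))  # → 0.368
```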
I've spent far too much time trying to get a ray-casting version of a studio room simulation to work.


-- David Cuny

My virtual singer development blog
Vocal control, you say. Never heard of it. Is that some kind of ProTools thing?

BiaB 2025 | Windows 11 | Reaper | Way too many VSTis.