Neural Mapping of the HDGL Brain
What you’re looking at:
The lattice’s 16,384 slots each contribute a plane wave to the image. The (phase, freq) of each slot was accumulated by absorbing text tokens and the raw compressed-byte spectra of 8,696 forum images. Every slot is a superposition of everything that ever touched it via djb2 content-addressing.
R channel — slot 0 of each instance (first absorber per address)
G channel — slot 1
B channel — slot 2
Luminance — slot 3 (brightens regions where all 4 slots converged on similar phase)
Where the image shows sharp interference fringes, many slots are in phase agreement — attractor basins in the lattice. Where it’s diffuse/noisy, the slots are incoherent — no strong content was routed there.
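As a concrete sketch (mirroring the render_dream_numpy loop in the full script further down, with the constants taken from that code), one slot's contribution to the field looks roughly like this:

```python
import math
import numpy as np

PHI = 1.6180339887498948
TAU = 2.0 * math.pi

def slot_wave(phase_v, freq_v, size=256, t=0.0):
    """One slot's contribution: an oriented cosine plane wave over the image."""
    x = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(x, x)
    sf = 0.5 + abs(freq_v) % (3.0 * PHI)       # fold freq into a visible cycle range
    angle = phase_v + t                        # orientation from accumulated phase
    proj = xx * math.cos(angle) + yy * math.sin(angle * PHI)
    amp = math.tanh(abs(freq_v) * 0.4 + 0.05)  # heavily seeded slots shine brighter
    return amp * np.cos(TAU * sf * proj)

# The R channel of the portrait is the sum of slot-0 waves over all 4,096 instances;
# G and B come from slots 1 and 2, and slot 3 modulates luminance.
```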
After this, we refine:
dream_patches.png — The fingerprint mosaic. 64×64 grid, one 8×8 Gabor patch per content address. Each patch is rendered from ONLY its 4 slots — no averaging with unrelated content. Bright patches = many images/tokens converged there. Adjacent bright patches with similar stripe orientation = clusters of similar forum images. This is the “prism of state.”
dream_attractor.png — Pure charge density. Each pixel = one instance, colour = R/G/B slot charge (log-scaled). Shows which content addresses absorbed the most signal. Hot spots = attractors where many visually similar images collided.
dream_sorted.png — Same patches as above but sorted by spectral embedding (cos(phase)·|freq|, sin(phase)·|freq|). Spectrally related attractors are placed adjacently, so similar-coloured/oriented patches form gradient bands. This is the closest to seeing “this image is like that image”: coherent bands = families of visually related content.
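The sorting behind dream_sorted.png is just a 2-D spectral embedding followed by a lexicographic sort; this is the same logic as compute_sorted_order in the script below:

```python
import math

def sorted_order(slots, n_inst, n_slots_per):
    """Order instances so spectrally similar attractors end up adjacent."""
    def embed(i):
        s = slots[i * n_slots_per]              # slot 0 = primary phase/freq
        ph, fr = float(s[2]), abs(float(s[3]))
        return (math.cos(ph) * fr, math.sin(ph) * fr)
    return sorted(range(n_inst), key=embed)
```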
From here, we refined again.
This time we used a random image from our forums (chosen completely at random) to try to zero in on prismatically similar images to the feeder image:
From this we gleaned some information that helped us.
I asked the bot to supervise and tune, and from that we generated this:
Interesting: it generated a mosaic of what all appear to be original images (a bit warped) that were fed into the lattice earlier and then pulled back out through our spectral lens. Fascinating, and cool!
Wait, that's not quite what happened. Let's hear it straight from the horse's mouth:
What just happened, precisely:
- When image_seeder.py ran (the “learning” pass)
For each of the 8,696 forum images, it:
1. Read the first 4,096 raw compressed bytes — no decoding, no pixel parsing. The JPEG/PNG compression header IS the signal.
2. Strided those bytes down to 256 samples, normalized 0→1.
3. Ran phi-Fourier on those samples — 12 harmonics at frequencies (k + 0.1) × φ, computing cos_b[k] = the projection of the byte distribution onto each harmonic. This produces a 12-number spectral fingerprint.
4. Derived temporal_phase = fmod(cos_b[0] × π × φ, 2π) — one angle summarizing the dominant frequency character of the file.
5. Ran djb2 hash on the first 64 quantized samples → a content address (0–4095) into the lattice.
6. At that address, nudged phase, freq, and charge by alpha = 0.12 × (signal).
So after all 8,696 images: each of the 4,096 lattice instances holds the superposed spectral fingerprint of every image that hashed to it. Images with similar byte distributions hash to nearby addresses.
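Assembled from the steps above and the matching _phi_fourier_* / _djb2 helpers in the script below, here is a minimal sketch of the per-image seeding pass. The final nudge formula is illustrative only (the post just says alpha = 0.12 × signal), and image_seeder.py itself isn't reproduced here:

```python
import math

PHI = 1.6180339887498948
TAU = 2.0 * math.pi
FOURIER_N, SAMPLES_N, ALPHA = 12, 256, 0.12

def djb2(data: bytes) -> int:
    h = 5381
    for b in data:
        h = ((h * 33) + b) & 0xFFFFFFFF
    return h

def seed_image(raw: bytes, slots, total_slots: int):
    """Absorb one image's first 4 KiB of compressed bytes into the lattice."""
    raw = raw[:4096]
    stride = len(raw) / SAMPLES_N
    samples = [raw[int(i * stride)] / 255.0 for i in range(SAMPLES_N)]
    # phi-Fourier: project the byte distribution onto 12 golden-ratio harmonics
    cos_b = []
    for k in range(FOURIER_N):
        freq = (k + 0.1) * PHI
        cos_b.append(sum(s * math.cos(TAU * freq * i / SAMPLES_N)
                         for i, s in enumerate(samples)) / SAMPLES_N)
    temporal_phase = math.fmod(cos_b[0] * math.pi * PHI, TAU)
    # djb2 on the first 64 quantised samples → content address into the lattice
    key = bytes(int(s * 255) & 0xFF for s in samples[:64])
    addr = djb2(key) % total_slots
    # nudge that slot toward the new signal (illustrative update rule, not the
    # seeder's exact formula)
    slot = slots[addr]
    slot[2] += ALPHA * temporal_phase   # phase
    slot[3] += ALPHA * cos_b[0]         # freq
    slot[0] += 1                        # charge
```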
- When retrieve mode ran (the “recall” pass)
1. Encoded the guide image the exact same way → got its q_phase, q_freq.
2. Walked all 8,696 images again in the tar, encoding each on the fly.
3. Computed phase-space distance to the guide, using cos/sin differences to handle the 2π wrap — it's the chord distance on the unit circle:
   d = (cos(ph) - cos(q_phase))^2 + (sin(ph) - sin(q_phase))^2 + 0.1*(fr - q_freq)^2
4. Sorted by d, extracted the top-9 actual image files from the tar.
5. Decoded them with Pillow, tiled them, applied an 18% lattice-hue tint — the phase of each image's matched slot mapped to HSV hue, making the colouring subtly informed by the lattice's accumulated state.
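The scoring step reduces to a few lines; this matches the distance used in compute_guided_mask and render_retrieve in the script below:

```python
import math

def phase_distance(ph, fr, q_ph, q_fr):
    """Chord distance on the unit circle plus a lightly weighted freq term."""
    dp = (math.cos(ph) - math.cos(q_ph)) ** 2 + (math.sin(ph) - math.sin(q_ph)) ** 2
    return dp + 0.1 * (fr - q_fr) ** 2

# candidates is a list of (name, ph, fr) tuples built by re-encoding every image
# in the tar; the guide's own entry scores 0.0 by construction.
# top9 = sorted(candidates, key=lambda c: phase_distance(c[1], c[2], q_ph, q_fr))[:9]
```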
Why the guide image appeared as match #1 (d=0.0000)
Because we scored it against itself: it was already in the tar, so it encoded to exactly the q_phase it was compared against. Distance zero to yourself is guaranteed.
Why the other matches were genuinely similar
The phi-Fourier on raw compressed bytes is a legitimate structural similarity metric:
| Image type | Characteristic bytes | Phase cluster |
| --- | --- | --- |
| White-bg screenshot (PNG) | High 0xFF density, IHDR/IDAT headers | ~same φ band |
| JPEG photo (outdoor) | DCT-compressed, lower byte values, JFIF header | different φ band |
| Dark-background diagram | Mixed, low mean byte | another φ band |
The Teller County Facebook post (middle-right) matched because people screenshotting Facebook posts produce structurally identical PNG files — same app, same font rendering, same compression engine. The lattice didn’t know “this is about Teller County” — it knew “these bytes came from the same kind of file.”
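You can see the clustering directly by encoding a handful of files and printing their temporal phases; screenshots produced by the same app land in nearly the same φ band. A minimal check, assuming dream_image.py is importable (e.g. run from the pipeline/ directory) and using placeholder file names:

```python
from pathlib import Path
from dream_image import _phi_fourier_sample, _phi_fourier_encode

for p in ["screenshot_a.png", "screenshot_b.png", "photo.jpg"]:  # placeholder paths
    raw = Path(p).read_bytes()[:4096]
    _, phase = _phi_fourier_encode(_phi_fourier_sample(raw))
    print(f"{p:20s} temporal_phase = {phase:+.3f}")
# Structurally similar files (same encoder, same headers) print nearly identical phases.
```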
The lattice’s actual role
The lattice didn’t do the retrieval — the retrieval was a direct re-scan of the tar. What the lattice contributed was:
- Proving the encoding is stable — the same image always maps to the same slot, so the address space is a true content-addressable memory.
- The 18% hue tint — the dream colouring layer. Each retrieved image's slot has accumulated phase from all images that ever collided there. That phase became the colour cast.
- The conceptual frame: the lattice is the index. The tar is the storage. retrieve mode is doing what a brain does — using a fast associative index to pull the actual memory, not trying to reconstruct the pixel grid from the index alone.
The earlier reconstruct/spectral modes were trying to do the impossible: reconstruct 512×512 pixels from 8 float64s per slot. That’s like trying to reconstruct a book from its ISBN. retrieve uses the ISBN to find the book in the library.
Naturally, I wasn’t satisfied with that answer…
(this was the easy part: inferring a reverse-fed image's locale from the lattice)
The hard part was producing this:
Look closely - see the ghost writing in white and the holographic patterning throughout?
#!/usr/bin/env python3
"""
dream_image.py — Renders the HDGL lattice's current "dream state" as a 2D image.
The lattice has absorbed 49,341 text tokens and 8,696 forum images via phi-Fourier
encoding. Each of the 16,384 slots carries (phase, freq) bias vectors — the
spectral fingerprint of every piece of content that ever touched that slot.
Nine rendering modes:
reconstruct — *** MOST IMAGE-LIKE *** Each of the 4096 content addresses becomes
one pixel (64×64), coloured by cos(phase) weighted by log-charge.
Bicubic-upscaled + Gaussian-blurred to size×size. The result is a
soft, painterly portrait of the mean spectral character of every image
family the lattice absorbed. Run this first.
spectral — Inverse-FFT reconstruction. Treats the 64×64 phase grid as a 2D
Fourier spectrum (amplitude = charge, angle = phase) and does ifft2.
Returns a spatial image whose frequency energy matches the corpus.
Produces wetter, more radial, painterly output than reconstruct.
Often looks like an X-ray or long-exposure photo of the forum.
field — All 16,384 slots summed simultaneously into one interference field.
The Chladni/brainwave portrait of everything at once (original mode).
patches — 64×64 mosaic of Gabor fingerprint patches, one per content address.
Each patch shows ONLY that instance's 4 slots. Topology map of absorbed
content. Bright = heavily seeded. Similar stripes = similar images.
attractor — Charge density heat map. One pixel per instance. Colour = per-slot
charge in R/G/B. Shows WHERE content concentrated.
sorted — Patches re-arranged so spectrally similar attractors sit next to each other.
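retrieve — Uses the lattice as a content-addressable index. Re-encodes every image in the
corpus tar, ranks by phase-space distance to --guide, and tiles the closest real
images with an 18% phase-hue tint. Requires --guide (tar auto-detected or --tar).
recall — Finds every image that djb2-hashes to the exact same slot as --guide (true
collision partners) and tiles them next to the guide. Requires --guide.
generate — Warps the guide's own pixels through an IFFT displacement field built from the
256 lattice instances nearest the guide's signature. Requires --guide.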
--query PATH — Feed any raw file through the same phi-Fourier → djb2 encoder.
Highlights which patch in the mosaic that content maps to.
Usage:
python pipeline/dream_image.py --mode reconstruct --size 512 # START HERE
python pipeline/dream_image.py --mode spectral --size 512 # IFFT portrait
# Guided dream — steer the reconstruction toward a specific image:
python pipeline/dream_image.py --mode reconstruct --guide myimage.jpg
python pipeline/dream_image.py --mode spectral --guide myimage.jpg
python pipeline/dream_image.py --mode reconstruct --guide myimage.jpg --neighbors 128
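python pipeline/dream_image.py --mode retrieve --guide myimage.jpg    # real-image mosaic
python pipeline/dream_image.py --mode recall --guide myimage.jpg      # slot-collision partners
python pipeline/dream_image.py --mode generate --guide myimage.jpg    # lattice-warped guide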
python pipeline/dream_image.py # field mode, 256×256
python pipeline/dream_image.py --mode patches --size 512
python pipeline/dream_image.py --mode attractor --size 512
python pipeline/dream_image.py --mode sorted --size 512
python pipeline/dream_image.py --mode patches --query myimage.jpg
python pipeline/dream_image.py --terminal
python pipeline/dream_image.py --animate 60
python pipeline/dream_image.py --t 3.14
python pipeline/dream_image.py --lattice metal_infer/hdgl_lattice_corpus.bin
Licensed per https://zchg.org/t/legal-notice-copyright-applicable-ip-and-licensing-read-me/440
"""
import math
import struct
import sys
from pathlib import Path
PHI = 1.6180339887498948
TAU = 2.0 * math.pi
REPO_ROOT = Path(__file__).resolve().parent.parent
LATTICE_BIN = REPO_ROOT / "metal_infer" / "hdgl_lattice_corpus.bin"
OUT_DEFAULT = REPO_ROOT / "pipeline" / "dream_latest.ppm"
# Binary format — mirrors image_seeder.py and hdgl_bootloaderz.c
HEADER_FMT = '<4sIIIddd'
HEADER_SIZE = struct.calcsize(HEADER_FMT) # 40 bytes
SLOT_FMT = '<QqddII'
SLOT_SIZE = struct.calcsize(SLOT_FMT) # 40 bytes
# ── Lattice loader ────────────────────────────────────────────────────────────
def load_lattice(path: Path):
raw = path.read_bytes()
if len(raw) < HEADER_SIZE:
raise ValueError(f"File too small: {len(raw)} bytes")
sig, ver, n_inst, n_slots_per, lat_t, omega, pvar = struct.unpack_from(HEADER_FMT, raw, 0)
if sig != b'HDGL':
raise ValueError(f"Bad magic: {sig!r}")
total = n_inst * n_slots_per
expected = HEADER_SIZE + total * SLOT_SIZE
if len(raw) != expected:
raise ValueError(f"Size mismatch: got {len(raw)}, expected {expected}")
slots = []
off = HEADER_SIZE
for _ in range(total):
slots.append(list(struct.unpack_from(SLOT_FMT, raw, off)))
off += SLOT_SIZE
return n_inst, n_slots_per, total, lat_t, omega, pvar, slots
# ── Dream renderer (numpy path) ───────────────────────────────────────────────
def render_dream_numpy(slots, n_inst: int, size: int, t_offset: float):
import numpy as np
# Pixel coordinate grids: x ∈ [-1,1], y ∈ [-1,1]
x = np.linspace(-1.0, 1.0, size, dtype=np.float32)
y = np.linspace(-1.0, 1.0, size, dtype=np.float32)
xx, yy = np.meshgrid(x, y) # (size, size)
# Accumulate R, G, B interference fields + luminance modulation
fields = [np.zeros((size, size), dtype=np.float64) for _ in range(3)]
lum = np.zeros((size, size), dtype=np.float64)
for inst in range(n_inst):
for ch in range(4):
s = slots[inst * 4 + ch]
# Slot fields: [mantissa_word0, exponent, phase, freq, state_flags, strand_idx]
phase_v = s[2] # float64 — accumulated phase bias
freq_v = s[3] # float64 — accumulated freq bias
strand = int(s[5]) # Spiral8 strand index
# Spatial frequency: fold into visible range (0.5 – ~4 cycles per field).
# Use phi-modular folding so frequency distribution stays interesting.
sf = 0.5 + abs(freq_v) % (3.0 * PHI)
# Wave direction: phase_v determines orientation.
# Lattice time sweeps both axes, but the phi-stretched y-axis ensures
# R, G, B channels drift apart visibly rather than evolving in lockstep.
t_strand = t_offset * (1.0 + strand * PHI * 0.01)
angle = phase_v + t_strand
dx = math.cos(angle)
dy = math.sin(angle * PHI)
# Project pixel coordinates onto wave direction
proj = xx * dx + yy * dy
# Amplitude: sigmoid of freq so heavily-seeded slots shine brighter.
# Slots that absorbed many images/tokens have large freq_v → amp → 1.
amp = math.tanh(abs(freq_v) * 0.4 + 0.05)
wave = amp * np.cos(TAU * sf * proj)
if ch < 3:
fields[ch] += wave
else:
lum += wave
# Luminance modulation: normalize lum to [0, 0.4] and boost brightness of
# high-energy regions (where many slots with similar phase converged).
lum_norm = (lum - lum.min()) / max(lum.max() - lum.min(), 1e-9) * 0.4
out_channels = []
for f in fields:
f = f * (1.0 + lum_norm)
lo, hi = f.min(), f.max()
if hi - lo < 1e-9:
out_channels.append(np.full((size, size), 128, dtype=np.uint8))
else:
out_channels.append(((f - lo) / (hi - lo) * 255.0).astype(np.uint8))
return out_channels[0], out_channels[1], out_channels[2]
# ── Dream renderer (pure Python fallback — use small size) ───────────────────
def render_dream_pure(slots, n_inst: int, size: int, t_offset: float):
fields = [[[0.0] * size for _ in range(size)] for _ in range(3)]
lum_f = [[0.0] * size for _ in range(size)]
xs = [2.0 * c / max(size - 1, 1) - 1.0 for c in range(size)]
ys = [2.0 * r / max(size - 1, 1) - 1.0 for r in range(size)]
for inst in range(n_inst):
for ch in range(4):
s = slots[inst * 4 + ch]
phase_v = s[2]
freq_v = s[3]
strand = int(s[5])
sf = 0.5 + abs(freq_v) % (3.0 * PHI)
t_strand = t_offset * (1.0 + strand * PHI * 0.01)
angle = phase_v + t_strand
dx = math.cos(angle)
dy = math.sin(angle * PHI)
amp = math.tanh(abs(freq_v) * 0.4 + 0.05)
for r in range(size):
for c in range(size):
proj = xs[c] * dx + ys[r] * dy
w = amp * math.cos(TAU * sf * proj)
if ch < 3:
fields[ch][r][c] += w
else:
lum_f[r][c] += w
# Normalize
lum_lo = min(v for row in lum_f for v in row)
lum_hi = max(v for row in lum_f for v in row)
lum_rng = max(lum_hi - lum_lo, 1e-9)
out_channels = []
for f in fields:
flat = [v for row in f for v in row]
lo, hi = min(flat), max(flat)
rng = max(hi - lo, 1e-9)
ch_out = []
for r in range(size):
row = []
for c in range(size):
v = f[r][c] * (1.0 + (lum_f[r][c] - lum_lo) / lum_rng * 0.4)
row.append(max(0, min(255, int((v - lo) / rng * 255))))
ch_out.append(row)
out_channels.append(ch_out)
return out_channels[0], out_channels[1], out_channels[2]
def render_dream(slots, n_inst: int, size: int, t_offset: float):
try:
import numpy # noqa: F401
return render_dream_numpy(slots, n_inst, size, t_offset)
except ImportError:
size = min(size, 64)
print(f"[dream] numpy not available — pure Python fallback at {size}x{size}",
file=sys.stderr)
return render_dream_pure(slots, n_inst, size, t_offset)
# ── Output writers ────────────────────────────────────────────────────────────
def write_ppm(path: Path, R, G, B, size: int):
try:
import numpy as np
pixels = np.stack([R, G, B], axis=-1)
with open(path, 'wb') as f:
f.write(f"P6\n{size} {size}\n255\n".encode())
f.write(pixels.tobytes())
except ImportError:
with open(path, 'wb') as f:
f.write(f"P6\n{size} {size}\n255\n".encode())
for row in range(size):
for col in range(size):
f.write(bytes([R[row][col], G[row][col], B[row][col]]))
def try_write_png(path: Path, R, G, B, size: int) -> bool:
"""Write PNG if Pillow is available. Returns True on success."""
try:
import numpy as np
from PIL import Image
img = Image.fromarray(np.stack([R, G, B], axis=-1), mode='RGB')
png = path.with_suffix('.png')
img.save(png)
print(f"[dream] PNG: {png}")
return True
except ImportError:
return False
# ── phi-Fourier + djb2 (mirrors image_seeder.py exactly) ────────────────────
FOURIER_N = 12
SAMPLES_N = 256
def _djb2(data: bytes) -> int:
h = 5381
for b in data:
h = ((h * 33) + b) & 0xFFFFFFFF
return h
def _phi_fourier_sample(raw: bytes) -> list:
n = len(raw)
if n == 0:
return [0.0] * SAMPLES_N
stride = n / SAMPLES_N
return [raw[int(i * stride)] / 255.0 for i in range(SAMPLES_N)]
def _phi_fourier_encode(samples: list):
"""
Mirrors image_seeder.py phi_fourier_encode exactly.
Returns (cos_b[FOURIER_N], temporal_phase).
"""
n = len(samples)
cos_b = []
for k in range(FOURIER_N):
freq = (k + 0.1) * PHI
sc = ss = 0.0
for i, s in enumerate(samples):
angle = TAU * freq * i / n
sc += s * math.cos(angle)
ss += s * math.sin(angle)
cos_b.append(sc / n)
temporal_phase = math.fmod(cos_b[0] * math.pi * PHI, TAU)
return cos_b, temporal_phase
def compute_guided_mask(slots, n_inst: int, n_slots_per: int,
guide_path: Path, k: int) -> set:
"""
Encode `guide_path` identically to image_seeder.py, extract its
(phase, freq) signature, rank all 4096 instances by phase-space distance,
and return the indices of the K nearest ones.
Distance metric:
d = sqrt( (cos(ph_i)-cos(q_ph))^2 + (sin(ph_i)-sin(q_ph))^2
+ 0.1*(fr_i - q_fr)^2 )
Using cos/sin difference instead of raw angle difference handles wrap-around.
freq weight 0.1 keeps phase the primary signal (freq accumulates nonlinearly).
"""
raw = guide_path.read_bytes()[:4096]
samples = _phi_fourier_sample(raw)
cos_b, q_phase = _phi_fourier_encode(samples)
# Derive a scalar freq proxy from the harmonics — same energy as image_seeder
q_freq = sum(abs(c) for c in cos_b) / (len(cos_b) or 1)
q_cos = math.cos(q_phase)
q_sin = math.sin(q_phase)
distances = []
for i in range(n_inst):
s0 = slots[i * n_slots_per] # slot 0 = primary phase/freq
ph = float(s0[2])
fr = abs(float(s0[3]))
dp = (math.cos(ph) - q_cos) ** 2 + (math.sin(ph) - q_sin) ** 2
df = (fr - q_freq) ** 2
distances.append((dp + 0.1 * df, i))
distances.sort()
return set(i for _, i in distances[:k])
def apply_guided_mask(slots, n_inst: int, n_slots_per: int, mask: set) -> list:
"""
Return a copy of slots where every instance NOT in `mask` has its
charge zeroed (so it contributes nothing to the reconstruction).
Phase/freq are preserved so the spectral geometry is intact.
"""
out = [list(s) for s in slots]
for i in range(n_inst):
if i not in mask:
for ch in range(n_slots_per):
out[i * n_slots_per + ch][0] = 0 # charge = 0
return out
def find_query_instance(path: Path, total_slots: int, n_slots_per: int) -> int:
"""
Load any file, phi-Fourier encode the first 4096 raw bytes, run djb2 on the
first 64 quantised samples — exactly what image_seeder.py does. Returns the
instance index (0..n_inst-1) that this file maps to.
"""
raw = path.read_bytes()[:4096]
samples = _phi_fourier_sample(raw)
key_bytes = bytes(int(s * 255) & 0xFF for s in samples[:64])
slot_idx = _djb2(key_bytes) % total_slots
return slot_idx // n_slots_per # which instance
# ── Mode: patches — fingerprint mosaic ───────────────────────────────────────
def render_patches(slots, n_inst: int, n_slots_per: int, size: int, t_offset: float,
highlight_inst: int = -1, order=None):
"""
64×64 mosaic (for 4096 instances). Each cell = Gabor fingerprint of one instance.
Bright patches = many images/tokens converged (high charge).
Dark patches = sparse / unabsorbed content address.
Similar stripes = spectrally related content.
Highlighted cell (yellow border) = the attractor for --query file.
`order` — optional list of instance indices to remap position→instance.
Used by render_sorted to spatially arrange by similarity.
"""
import numpy as np
cols = int(math.isqrt(n_inst)) # 64 for 4096 instances
rows = (n_inst + cols - 1) // cols
patch = max(2, size // cols)
H = rows * patch
W = cols * patch
R = np.zeros((H, W), dtype=np.float64)
G = np.zeros((H, W), dtype=np.float64)
B = np.zeros((H, W), dtype=np.float64)
# Global charge range for amplitude scaling
charges = np.array([
sum(float(slots[i * n_slots_per + ch][0]) for ch in range(min(n_slots_per, 4)))
for i in range(n_inst)
], dtype=np.float64)
max_charge = float(charges.max()) or 1.0
# Local pixel coordinate grid for each patch
px = np.linspace(-1.0, 1.0, patch, dtype=np.float32)
gx, gy = np.meshgrid(px, px) # (patch, patch)
if order is None:
order = list(range(n_inst))
for pos, inst in enumerate(order):
ri = pos // cols
ci = pos % cols
if ri >= rows:
break
y0 = ri * patch
x0 = ci * patch
charge_norm = math.log1p(charges[inst]) / math.log1p(max_charge) # log-scale
for ch_idx, field in enumerate([R, G, B]):
if ch_idx >= n_slots_per:
continue
s = slots[inst * n_slots_per + ch_idx]
phase_v = float(s[2])
freq_v = float(s[3])
strand = int(s[5])
sf = 0.5 + abs(freq_v) % (3.0 * PHI)
t_s = t_offset * (1.0 + strand * PHI * 0.01)
angle = phase_v + t_s
# Gabor: oriented stripe at `angle`, asymmetric axes for colour separation
proj = gx * math.cos(angle) + gy * math.sin(angle * PHI)
amp = math.tanh(abs(freq_v) * 0.4 + 0.05) * (0.15 + 0.85 * charge_norm)
field[y0:y0 + patch, x0:x0 + patch] += amp * np.cos(TAU * sf * proj)
# Highlight the query instance with a yellow border (2-pixel ring)
if 0 <= highlight_inst < n_inst:
pos = order.index(highlight_inst) if highlight_inst in order else -1
if pos >= 0:
ri = pos // cols
ci = pos % cols
y0 = ri * patch
x0 = ci * patch
bw = max(1, patch // 6) # border width scales with patch size
# Yellow = full R, full G, no B
R[y0:y0 + patch, x0:x0 + bw] = R.max()
R[y0:y0 + patch, x0 + patch - bw:x0 + patch] = R.max()
R[y0:y0 + bw, x0:x0 + patch] = R.max()
R[y0 + patch - bw:y0 + patch, x0:x0 + patch] = R.max()
G[y0:y0 + patch, x0:x0 + bw] = G.max()
G[y0:y0 + patch, x0 + patch - bw:x0 + patch] = G.max()
G[y0:y0 + bw, x0:x0 + patch] = G.max()
G[y0 + patch - bw:y0 + patch, x0:x0 + patch] = G.max()
B[y0:y0 + patch, x0:x0 + bw] = B.min()
B[y0:y0 + patch, x0 + patch - bw:x0 + patch] = B.min()
B[y0:y0 + bw, x0:x0 + patch] = B.min()
B[y0 + patch - bw:y0 + patch, x0:x0 + patch] = B.min()
out = []
for f in [R, G, B]:
lo, hi = f.min(), f.max()
if hi - lo < 1e-9:
out.append(np.full((H, W), 24, dtype=np.uint8))
else:
out.append(((f - lo) / (hi - lo) * 255).astype(np.uint8))
return out[0], out[1], out[2]
# ── Mode: attractor — charge density heat map ────────────────────────────────
def render_attractor(slots, n_inst: int, n_slots_per: int, size: int,
highlight_inst: int = -1):
"""
One pixel per instance. Colour = per-channel log-charge. Upscaled to size×size.
Bright spots = heavily seeded content addresses. Dark = sparse.
"""
import numpy as np
cols = int(math.isqrt(n_inst))
rows = (n_inst + cols - 1) // cols
patch = max(1, size // cols)
chs = []
for ch in range(3):
c = np.array([
math.log1p(float(slots[i * n_slots_per + ch][0])) if ch < n_slots_per else 0.0
for i in range(n_inst)
], dtype=np.float64).reshape(rows, cols)
lo, hi = c.min(), c.max()
if hi > lo:
c = ((c - lo) / (hi - lo) * 255).astype(np.uint8)
else:
c = np.zeros((rows, cols), dtype=np.uint8)
# Upscale by patch via nearest-neighbour (kron)
chs.append(np.kron(c, np.ones((patch, patch), dtype=np.uint8)))
# Yellow highlight border
if 0 <= highlight_inst < n_inst:
ri = highlight_inst // cols
ci = highlight_inst % cols
y0, x0 = ri * patch, ci * patch
bw = max(1, patch // 4)
for f in [chs[0], chs[1]]: # R+G = yellow
f[y0:y0+patch, x0:x0+bw] = 255
f[y0:y0+patch, x0+patch-bw:x0+patch] = 255
f[y0:y0+bw, x0:x0+patch] = 255
f[y0+patch-bw:y0+patch, x0:x0+patch] = 255
chs[2][y0:y0+patch, x0:x0+bw] = 0
chs[2][y0:y0+patch, x0+patch-bw:x0+patch] = 0
chs[2][y0:y0+bw, x0:x0+patch] = 0
chs[2][y0+patch-bw:y0+patch, x0:x0+patch] = 0
return chs[0], chs[1], chs[2]
# ── Mode: sorted — spectral sort mosaic ──────────────────────────────────────
def compute_sorted_order(slots, n_inst: int, n_slots_per: int) -> list:
"""
Embed each instance as (x, y) = (cos(phase0)*|freq0|, sin(phase0)*|freq0|).
Sort by x then y so spectrally similar attractors sit next to each other.
Returns the permuted instance list (position → instance).
"""
def embed(i):
s = slots[i * n_slots_per]
ph = float(s[2])
fr = abs(float(s[3]))
return (math.cos(ph) * fr, math.sin(ph) * fr)
return sorted(range(n_inst), key=embed)
def render_sorted(slots, n_inst: int, n_slots_per: int, size: int, t_offset: float,
highlight_inst: int = -1):
"""Patches mode with spectral-sort ordering so similar attractors are adjacent."""
order = compute_sorted_order(slots, n_inst, n_slots_per)
return render_patches(slots, n_inst, n_slots_per, size, t_offset,
highlight_inst=highlight_inst, order=order)
# ── Mode: reconstruct — phase-map bicubic upscale ───────────────────────────
def render_reconstruct(slots, n_inst: int, n_slots_per: int, size: int):
"""
Most image-like mode. Each content address = one pixel (64×64 grid).
Pixel colour:
cos(phase) — correlates with mean luminance of images that hashed here,
because temporal_phase was derived from cos_b[0] of the raw
byte phi-Fourier transform.
weighted by log(charge+1) so heavily-seeded content addresses are bright.
Output is bicubic-upscaled and Gaussian-blurred to size×size.
Looks like a soft watercolour portrait of the corpus image content.
"""
import numpy as np
cols = int(math.isqrt(n_inst)) # 64
rows = (n_inst + cols - 1) // cols # 64
channels = []
for ch in range(min(n_slots_per, 3)):
phase = np.array(
[math.cos(float(slots[i * n_slots_per + ch][2])) for i in range(n_inst)],
dtype=np.float64
).reshape(rows, cols)
charge = np.array(
[math.log1p(float(slots[i * n_slots_per + ch][0])) for i in range(n_inst)],
dtype=np.float64
).reshape(rows, cols)
charge /= (charge.max() or 1.0)
# Blend: structural phase signal + charge brightness
combined = phase * (0.4 + 0.6 * charge)
lo, hi = combined.min(), combined.max()
normed = (combined - lo) / (hi - lo + 1e-9)
channels.append(normed)
try:
from PIL import Image, ImageFilter
blur_r = max(1.0, size / cols * 0.7) # ~5.6 px at 512/64
out = []
for c in channels:
img = Image.fromarray((c * 255).astype(np.uint8), mode='L')
img = img.resize((size, size), Image.BICUBIC)
img = img.filter(ImageFilter.GaussianBlur(radius=blur_r))
out.append(np.array(img))
return out[0], out[1], out[2]
except ImportError:
scale = max(1, size // cols)
out = []
for c in channels:
out.append(np.kron((c * 255).astype(np.uint8),
np.ones((scale, scale), dtype=np.uint8)))
return out[0], out[1], out[2]
# ── Mode: spectral — inverse-FFT portrait ────────────────────────────────────
def render_spectral(slots, n_inst: int, n_slots_per: int, size: int):
"""
Inverse-FFT reconstruction. Treats the accumulated slot state as a 2D
Fourier spectrum (amplitude = log-charge, angle = slot phase) and recovers
the spatial image whose frequency content matches what the lattice absorbed.
ifftshift centres low-frequency energy — smooth/bright regions dominate the
centre of the image, high-frequency structure radiates outward. The result
OFTEN resembles a long-exposure or Fourier-transform portrait of the forum:
radially symmetric gradients, luminous cores, spectral halos.
"""
import numpy as np
cols = int(math.isqrt(n_inst))
rows = (n_inst + cols - 1) // cols
out_channels = []
for ch in range(min(n_slots_per, 3)):
phases = np.array(
[float(slots[i * n_slots_per + ch][2]) for i in range(n_inst)],
dtype=np.float64
).reshape(rows, cols)
charges = np.array(
[math.log1p(float(slots[i * n_slots_per + ch][0])) for i in range(n_inst)],
dtype=np.float64
).reshape(rows, cols)
charges /= (charges.max() or 1.0)
# Complex Fourier spectrum
spectrum = charges * np.exp(1j * phases)
# Inverse FFT — ifftshift centres the low-frequency content
spatial = np.fft.ifft2(np.fft.ifftshift(spectrum))
img = np.abs(spatial)
lo, hi = img.min(), img.max()
normed = (img - lo) / (hi - lo + 1e-9)
out_channels.append(normed)
try:
from PIL import Image, ImageFilter
blur_r = max(0.5, size / cols * 0.35) # lighter blur than reconstruct
result = []
for c in out_channels:
img = Image.fromarray((c * 255).astype(np.uint8), mode='L')
img = img.resize((size, size), Image.BICUBIC)
img = img.filter(ImageFilter.GaussianBlur(radius=blur_r))
result.append(np.array(img))
return result[0], result[1], result[2]
except ImportError:
scale = max(1, size // cols)
result = []
for c in out_channels:
result.append(np.kron((c * 255).astype(np.uint8),
np.ones((scale, scale), dtype=np.uint8)))
return result[0], result[1], result[2]
# ── Mode: retrieve — pull real similar images from the tar ───────────────────
def _find_tar() -> Path:
candidates = list((REPO_ROOT / "Discourse File").glob("*.tar.gz"))
if candidates:
return candidates[0]
raise FileNotFoundError(
f"No tar.gz found in 'Discourse File/'. Pass --tar <path> explicitly."
)
IMAGE_EXTS = {'.png', '.jpg', '.jpeg', '.gif', '.webp'}
def render_retrieve(guide_path: Path, tar_path: Path, n_top: int, size: int,
slots, n_inst: int, n_slots_per: int, total: int):
"""
Uses the lattice as a content-addressable retrieval index.
1. Encode guide_path with the same phi-Fourier → djb2 pipeline as image_seeder.py
2. Walk every image in tar_path, encode each the same way
3. Rank by phase-space distance to the guide signature
4. Decode the top n_top images with Pillow, tile into a sqrt(n_top)×sqrt(n_top) grid
5. Apply a subtle phase-hue tint from the matched lattice slot (the "trippy" layer)
Result: a real mosaic of real forum images that the lattice considers similar
to the guide — recognizable content, gently coloured by the lattice's dream state.
"""
import tarfile
import io
try:
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter
except ImportError:
print("[dream] retrieve mode requires Pillow: pip install Pillow", file=sys.stderr)
sys.exit(1)
# Encode guide
raw_guide = guide_path.read_bytes()[:4096]
samp_guide = _phi_fourier_sample(raw_guide)
cos_b_guide, q_phase = _phi_fourier_encode(samp_guide)
q_freq = sum(abs(c) for c in cos_b_guide) / (len(cos_b_guide) or 1)
q_cos = math.cos(q_phase)
q_sin = math.sin(q_phase)
print(f"[dream] Scoring images in tar — this takes ~30–90 s for 8k images …")
candidates = [] # (distance, tar_member_name, raw_bytes)
with tarfile.open(str(tar_path), 'r:gz') as tf:
for member in tf:
if not member.isfile():
continue
if Path(member.name).suffix.lower() not in IMAGE_EXTS:
continue
try:
raw = tf.extractfile(member).read(4096)
except Exception:
continue
samp = _phi_fourier_sample(raw)
cos_b, ph = _phi_fourier_encode(samp)
fr = sum(abs(c) for c in cos_b) / (len(cos_b) or 1)
dp = (math.cos(ph) - q_cos)**2 + (math.sin(ph) - q_sin)**2
df = (fr - q_freq)**2
dist = dp + 0.1 * df
candidates.append((dist, member.name))
candidates.sort(key=lambda x: x[0])
top_names = [name for _, name in candidates[:n_top]]
print(f"[dream] Top {n_top} matches (closest first):")
for rank, (dist, name) in enumerate(candidates[:n_top], 1):
print(f" {rank:2d}. {Path(name).name} d={dist:.4f}")
# Extract actual image bytes for the top candidates
top_images = {}
for member in tf:
if member.name in top_names and member.isfile():
try:
top_images[member.name] = tf.extractfile(member).read()
except Exception:
pass
# Order top images by score
ordered = [top_images[n] for n in top_names if n in top_images]
# Tile size per cell
cols_g = max(1, int(math.isqrt(n_top)))
rows_g = (n_top + cols_g - 1) // cols_g
cell = size // cols_g
canvas = Image.new('RGB', (cols_g * cell, rows_g * cell), (20, 20, 20))
for idx, img_bytes in enumerate(ordered):
ri = idx // cols_g
ci = idx % cols_g
try:
img = Image.open(io.BytesIO(img_bytes)).convert('RGB')
except Exception:
img = Image.new('RGB', (cell, cell), (40, 40, 40))
# Resize to cell, preserve aspect with crop
img = img.resize((cell, cell), Image.LANCZOS)
# ── Lattice phase tint: colour the cell with the hue of its
# matched slot — the "dream" colouring layer ────────────────
name = top_names[idx] if idx < len(top_names) else ""
raw_b = (ordered[idx])[:4096]
sampb = _phi_fourier_sample(raw_b)
keyb = bytes(int(s * 255) & 0xFF for s in sampb[:64])
si = _djb2(keyb) % total
inst_i = si // n_slots_per
s0 = slots[inst_i * n_slots_per]
phase_i = float(s0[2])
freq_i = abs(float(s0[3]))
charge_i= math.log1p(float(s0[0]))
# Phase maps to hue via HSV; charge brightens saturation
hue = (phase_i % TAU) / TAU # 0..1
sat = min(0.35, 0.05 + freq_i * 0.04) # gentle tint, not a flood
# Apply as a soft colour overlay using Pillow
overlay = Image.new('RGB', (cell, cell),
_hsv_to_rgb_bytes(hue, sat, 0.5))
img = Image.blend(img, overlay, alpha=0.18) # 18% tint
# Subtle sharpening so retrieved images stay readable
img = img.filter(ImageFilter.UnsharpMask(radius=1.2, percent=60, threshold=3))
canvas.paste(img, (ci * cell, ri * cell))
R = np.array(canvas)[:, :, 0]
G = np.array(canvas)[:, :, 1]
B = np.array(canvas)[:, :, 2]
return R, G, B, canvas
def _hsv_to_rgb_bytes(h: float, s: float, v: float) -> tuple:
"""Convert HSV (0..1 each) to (R, G, B) 0..255 tuple."""
if s == 0.0:
c = int(v * 255)
return (c, c, c)
i = int(h * 6.0) % 6
f = h * 6.0 - int(h * 6.0)
p = v * (1.0 - s)
q = v * (1.0 - f * s)
t = v * (1.0 - (1.0 - f) * s)
row = [(v,t,p),(q,v,p),(p,v,t),(p,q,v),(t,p,v),(v,p,q)][i]
return tuple(int(x * 255) for x in row)
# ── Mode: recall — exact-slot images (true lattice recall) ──────────────────
def render_recall(guide_path: Path, tar_path: Path, size: int,
slots, n_inst: int, n_slots_per: int, total: int):
"""
True lattice reconstruction.
The djb2 hash maps every image to one of 4096 content addresses.
Multiple images can map to the SAME address (hash collision).
Those collision partners are the images the lattice "stored together" —
they have the most structurally similar compressed byte distributions.
This mode:
1. Computes the guide's exact slot address (djb2 mod total).
2. Walks the entire tar and finds every OTHER image that hashes to
the same slot — these are true lattice-recall partners.
3. Returns those images as a grid.
If zero collisions: falls back to the 4 nearest-neighbour images.
The guide itself is excluded from the results (self-recall is trivial).
This is the closest thing to "reconstructing the image from the lattice"
that the slot representation actually supports — the address IS the image's
identity in the lattice; the collision set IS its stored family.
"""
import tarfile, io
try:
import numpy as np
from PIL import Image, ImageFilter
except ImportError:
print("[dream] recall mode requires Pillow: pip install Pillow", file=sys.stderr)
sys.exit(1)
# Compute guide's exact slot address
raw_g = guide_path.read_bytes()[:4096]
samp_g = _phi_fourier_sample(raw_g)
key_g = bytes(int(s * 255) & 0xFF for s in samp_g[:64])
target = _djb2(key_g) % total # exact slot index
guide_name = guide_path.name
print(f"[dream] Guide slot address : {target} (instance {target // n_slots_per})")
# Walk tar: bucket images by slot address, also compute distances for fallback
exact = [] # (name, bytes) — same slot as guide, not guide itself
scored = [] # (dist, name) for fallback
cos_b_g, q_ph = _phi_fourier_encode(samp_g)
q_freq = sum(abs(c) for c in cos_b_g) / (len(cos_b_g) or 1)
q_cos, q_sin = math.cos(q_ph), math.sin(q_ph)
raw_store = {} # name → raw bytes (for exact matches)
print(f"[dream] Scanning tar for slot-{target} collision partners …")
with tarfile.open(str(tar_path), 'r:gz') as tf:
for member in tf:
if not member.isfile():
continue
if Path(member.name).suffix.lower() not in IMAGE_EXTS:
continue
try:
raw = tf.extractfile(member).read(4096)
except Exception:
continue
samp = _phi_fourier_sample(raw)
key = bytes(int(s * 255) & 0xFF for s in samp[:64])
addr = _djb2(key) % total
cos_b, ph = _phi_fourier_encode(samp)
fr = sum(abs(c) for c in cos_b) / (len(cos_b) or 1)
dp = (math.cos(ph) - q_cos)**2 + (math.sin(ph) - q_sin)**2
dist = dp + 0.1 * (fr - q_freq)**2
scored.append((dist, member.name))
if addr == target and Path(member.name).name != guide_name:
exact.append(member.name)
raw_store[member.name] = raw # keep header bytes for later
if exact:
print(f"[dream] Exact slot collisions : {len(exact)}")
else:
scored.sort()
fallback_names = [n for _, n in scored[1:9]] # skip self (d≈0)
print(f"[dream] No exact collisions — using {len(fallback_names)} nearest neighbours")
exact = fallback_names
# Second pass: full bytes for exact/fallback names (separate open to reset cursor)
want_names = set(exact)
full_bytes = {}
with tarfile.open(str(tar_path), 'r:gz') as tf2:
for member in tf2:
if member.name in want_names and member.isfile():
try:
full_bytes[member.name] = tf2.extractfile(member).read()
except Exception:
pass
ordered = [full_bytes[n] for n in exact if n in full_bytes]
if not ordered:
print("[dream] No images decoded.", file=sys.stderr)
sys.exit(1)
print(f"[dream] Recall partners ({len(ordered)}):")
for n in exact[:len(ordered)]:
print(f" {Path(n).name}")
cols_g = max(1, int(math.isqrt(len(ordered))))
rows_g = (len(ordered) + cols_g - 1) // cols_g
cell = size // max(cols_g, 1)
# Also include guide itself at top-left so user sees it alongside recalls
all_imgs = [guide_path.read_bytes()] + ordered
cols_g = max(1, int(math.isqrt(len(all_imgs))))
rows_g = (len(all_imgs) + cols_g - 1) // cols_g
cell = size // max(cols_g, 1)
canvas = Image.new('RGB', (cols_g * cell, rows_g * cell), (10, 10, 20))
for idx, img_bytes in enumerate(all_imgs):
ri = idx // cols_g
ci = idx % cols_g
try:
img = Image.open(io.BytesIO(img_bytes)).convert('RGB')
img = img.resize((cell, cell), Image.LANCZOS)
except Exception:
img = Image.new('RGB', (cell, cell), (30, 30, 30))
# Gold border on guide (idx==0), cyan on collision partners
border_col = (255, 215, 0) if idx == 0 else (0, 200, 220)
bw = max(2, cell // 32)
arr = np.array(img)
arr[:bw, :] = border_col
arr[-bw:, :] = border_col
arr[:, :bw] = border_col
arr[:, -bw:] = border_col
canvas.paste(Image.fromarray(arr), (ci * cell, ri * cell))
R = np.array(canvas)[:, :, 0]
G = np.array(canvas)[:, :, 1]
B = np.array(canvas)[:, :, 2]
return R, G, B, canvas
# ── Mode: generate — weighted blend → novel image in cluster style ────────────
def render_generate(guide_path: Path, tar_path: Path, n_top: int, size: int,
slots, n_inst: int, n_slots_per: int, total: int):
"""
Warps the guide image through the lattice's spatial memory field.
The lattice cannot store pixels — but it stores phase and charge accumulated
from every image the seeder ever processed. Those values form a 2D vector
field when interpreted spatially. We use that field as a displacement map:
each pixel in the guide is nudged by the lattice's memory of what lived at
that phase-space address.
Algorithm:
1. Encode guide → (q_phase, q_freq). Find the 256 lattice instances
whose accumulated phase is closest to the guide's signature.
2. Build two IFFT displacement fields from those 256 instances:
dx_field: slot-channel 0 charge/phase → horizontal warp component
dy_field: slot-channel 1 charge/phase → vertical warp component
Each active instance contributes one complex frequency component
weighted by its log-charge and inverse phase-distance from the guide.
All other 3840 instances contribute zero → the field is focused on
precisely the region of lattice memory the guide activates.
3. Upscale dx_field and dy_field to (size × size), scale to ±20% of
image width/height respectively.
4. Bilinear-sample the guide image through the displacement field.
Every pixel migrates by the warp; the guide is recognisable but
no pixel sits where it did in the original.
5. The result is guide-coloured, lattice-shaped, and unique.
No corpus image is read. The tar is not opened.
"""
try:
import numpy as np
from PIL import Image
except ImportError:
print("[dream] generate mode requires Pillow + numpy: pip install Pillow numpy",
file=sys.stderr)
sys.exit(1)
# 1. Load guide pixels
guide_img = Image.open(str(guide_path)).convert('RGB').resize((size, size), Image.LANCZOS)
guide_arr = np.array(guide_img, dtype=np.float64) # (H, W, 3)
# 2. Encode guide → phase-space query
raw_g = guide_path.read_bytes()[:4096]
samp_g = _phi_fourier_sample(raw_g)
cos_b_g, q_ph = _phi_fourier_encode(samp_g)
q_freq = sum(abs(c) for c in cos_b_g) / (len(cos_b_g) or 1)
q_cos, q_sin = math.cos(q_ph), math.sin(q_ph)
# Rank all instances; take 256 nearest
k = 256
distances = []
for i in range(n_inst):
s0 = slots[i * n_slots_per]
ph = float(s0[2])
fr = abs(float(s0[3]))
dp = (math.cos(ph) - q_cos) ** 2 + (math.sin(ph) - q_sin) ** 2
distances.append((dp + 0.1 * (fr - q_freq) ** 2, i))
distances.sort()
active = distances[:k]
max_inv = 1.0 / (active[0][0] + 1e-9)
print(f"[dream] Warping guide through {k} activated lattice instances …")
print(f"[dream] Guide phase: {q_ph:.4f} freq: {q_freq:.4f}")
# 3. Build two focused IFFT displacement fields (dx from ch-0, dy from ch-1)
cols_lat = int(math.isqrt(n_inst))
rows_lat = (n_inst + cols_lat - 1) // cols_lat
def make_displacement(ch: int) -> np.ndarray:
"""IFFT of focused lattice spectrum for channel ch → normalised to [-1,+1]."""
spectrum = np.zeros((rows_lat, cols_lat), dtype=complex)
for dist, i in active:
row = i // cols_lat
col = i % cols_lat
s = slots[i * n_slots_per + ch]
amp = math.log1p(float(s[0])) * (1.0 / (dist + 1e-9)) / max_inv
ph = float(s[2])
spectrum[row, col] = complex(amp * math.cos(ph), amp * math.sin(ph))
# Take real part of IFFT (imaginary component cancels for symmetric fields)
spatial = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
lo, hi = spatial.min(), spatial.max()
normed = (spatial - lo) / (hi - lo + 1e-9) * 2.0 - 1.0 # [-1, +1]
# Upscale lattice grid (64×64) to image size
pil = Image.fromarray(((normed + 1.0) * 127.5).astype(np.uint8), mode='L')
pil = pil.resize((size, size), Image.BICUBIC)
return (np.array(pil, dtype=np.float64) - 127.5) / 127.5 # back to [-1,+1]
dx_norm = make_displacement(0) # (size, size) in [-1,+1]
dy_norm = make_displacement(1)
# Scale to ±20% of image dimensions
warp_x = dx_norm * size * 0.20
warp_y = dy_norm * size * 0.20
# 4. Bilinear sample guide through displacement field
h, w = guide_arr.shape[:2]
row_idx = np.arange(h)[:, None] + warp_y # (H, W)
col_idx = np.arange(w)[None, :] + warp_x # (H, W)
row_idx = np.clip(row_idx, 0, h - 1)
col_idx = np.clip(col_idx, 0, w - 1)
r0 = row_idx.astype(np.int32)
c0 = col_idx.astype(np.int32)
r1 = np.clip(r0 + 1, 0, h - 1)
c1 = np.clip(c0 + 1, 0, w - 1)
fr = row_idx - r0 # fractional row
fc = col_idx - c0 # fractional col
out = np.zeros((h, w, 3), dtype=np.float64)
for ch in range(3):
g = guide_arr[:, :, ch]
out[:, :, ch] = (
g[r0, c0] * (1 - fr) * (1 - fc) +
g[r0, c1] * (1 - fr) * fc +
g[r1, c0] * fr * (1 - fc) +
g[r1, c1] * fr * fc
)
canvas = Image.fromarray(np.clip(out, 0, 255).astype(np.uint8), mode='RGB')
arr = np.array(canvas)
return arr[:, :, 0], arr[:, :, 1], arr[:, :, 2], canvas
def render_terminal(R, G, B, size: int):
"""Render dream as ANSI true-color block art in the terminal."""
import shutil
tw, th = shutil.get_terminal_size((80, 40))
tw = min(tw, 160)
th = min(th - 3, 48)
scale_x = size / tw
scale_y = size / th
try:
import numpy as np
def sample(arr, ty, tx):
sy = min(int(ty * scale_y), size - 1)
sx = min(int(tx * scale_x), size - 1)
return int(arr[sy, sx])
except ImportError:
def sample(arr, ty, tx):
sy = min(int(ty * scale_y), size - 1)
sx = min(int(tx * scale_x), size - 1)
return int(arr[sy][sx])
print()
for ty in range(th):
row = ""
for tx in range(tw):
r = sample(R, ty, tx)
g = sample(G, ty, tx)
b = sample(B, ty, tx)
# Full block with ANSI true-color background
row += f"\x1b[48;2;{r};{g};{b}m\x1b[38;2;{r};{g};{b}m\u2588"
row += "\x1b[0m"
print(row)
print()
# ── Main ──────────────────────────────────────────────────────────────────────
def main():
lattice_path = LATTICE_BIN
out_path = OUT_DEFAULT
size = 256
t_offset = 0.0
terminal = False
animate = 0
mode = "field"
query_path = None
guide_path = None
n_neighbors = 256 # top 6.25% of 4096 — broad enough to be coherent
tar_path = None
n_mosaic = 9 # 3×3 grid for retrieve mode
i = 1
while i < len(sys.argv):
a = sys.argv[i]
if a == '--lattice' and i + 1 < len(sys.argv): lattice_path = Path(sys.argv[i+1]); i += 2
elif a == '--out' and i + 1 < len(sys.argv): out_path = Path(sys.argv[i+1]); i += 2
elif a == '--size' and i + 1 < len(sys.argv): size = int(sys.argv[i+1]); i += 2
elif a == '--t' and i + 1 < len(sys.argv): t_offset = float(sys.argv[i+1]); i += 2
elif a == '--animate' and i + 1 < len(sys.argv): animate = int(sys.argv[i+1]); i += 2
elif a == '--mode' and i + 1 < len(sys.argv): mode = sys.argv[i+1]; i += 2
elif a == '--query' and i + 1 < len(sys.argv): query_path = Path(sys.argv[i+1]); i += 2
elif a == '--guide' and i + 1 < len(sys.argv): guide_path = Path(sys.argv[i+1]); i += 2
elif a == '--neighbors' and i + 1 < len(sys.argv): n_neighbors = int(sys.argv[i+1]); i += 2
elif a == '--tar' and i + 1 < len(sys.argv): tar_path = Path(sys.argv[i+1]); i += 2
elif a == '--mosaic' and i + 1 < len(sys.argv): n_mosaic = int(sys.argv[i+1]); i += 2
elif a == '--terminal': terminal = True; i += 1
elif a in ('--help', '-h'): print(__doc__); sys.exit(0)
else: print(f"[dream] Unknown arg: {a}", file=sys.stderr); i += 1
VALID_MODES = ('field', 'patches', 'attractor', 'sorted', 'reconstruct', 'spectral',
'retrieve', 'recall', 'generate')
if mode not in VALID_MODES:
print(f"[dream] Unknown --mode {mode!r}. Choose from: {', '.join(VALID_MODES)}",
file=sys.stderr)
sys.exit(1)
if not lattice_path.exists():
print(f"[dream] Lattice not found: {lattice_path}", file=sys.stderr)
print(f" Run hdgl_corpus_seeder.exe first to generate it.", file=sys.stderr)
sys.exit(1)
print(f"[dream] Loading : {lattice_path.name}")
n_inst, n_slots_per, total, lat_t, omega, pvar, slots = load_lattice(lattice_path)
print(f"[dream] Lattice : {n_inst} instances x {n_slots_per} slots = {total} total")
print(f" State : t={lat_t:.3f} omega={omega:.4f} pvar={pvar:.4f}")
print(f"[dream] Mode : {mode}")
# ── Guided dream: find K-nearest instances and mask the rest ──────────────
if guide_path is not None:
if not guide_path.exists():
print(f"[dream] --guide file not found: {guide_path}", file=sys.stderr)
sys.exit(1)
print(f"[dream] Guide : {guide_path.name} (neighbors={n_neighbors})")
mask = compute_guided_mask(slots, n_inst, n_slots_per, guide_path, n_neighbors)
slots = apply_guided_mask(slots, n_inst, n_slots_per, mask)
guide_stem = guide_path.stem[:20].replace(' ', '_')
print(f" Focusing on {n_neighbors}/{n_inst} nearest instances "
f"({100*n_neighbors/n_inst:.1f}% of lattice)")
# Auto-rename output so it doesn't overwrite the unguided version
if out_path == OUT_DEFAULT:
out_path = REPO_ROOT / "pipeline" / f"dream_{mode}_guided_{guide_stem}.ppm"
# Resolve query instance
highlight_inst = -1
if query_path is not None:
if not query_path.exists():
print(f"[dream] --query file not found: {query_path}", file=sys.stderr)
sys.exit(1)
highlight_inst = find_query_instance(query_path, total, n_slots_per)
hi = n_inst // int(math.isqrt(n_inst)) # cols
print(f"[dream] Query : {query_path.name}")
print(f" → instance {highlight_inst} "
f"(row {highlight_inst // hi}, col {highlight_inst % hi}) "
f"[yellow border in output]")
# ── Render dispatch ────────────────────────────────────────────────────────
def _render():
if mode == 'field':
return render_dream(slots, n_inst, size, t_offset)
elif mode == 'patches':
return render_patches(slots, n_inst, n_slots_per, size, t_offset,
highlight_inst=highlight_inst)
elif mode == 'attractor':
return render_attractor(slots, n_inst, n_slots_per, size,
highlight_inst=highlight_inst)
elif mode == 'sorted':
return render_sorted(slots, n_inst, n_slots_per, size, t_offset,
highlight_inst=highlight_inst)
elif mode == 'reconstruct':
return render_reconstruct(slots, n_inst, n_slots_per, size)
elif mode == 'spectral':
return render_spectral(slots, n_inst, n_slots_per, size)
elif mode == 'retrieve':
raise RuntimeError("retrieve mode handled separately") # sentinel
# ── Retrieve / Recall / Generate: real-image modes ──────────────────────
if mode in ('retrieve', 'recall', 'generate'):
if guide_path is None:
print(f"[dream] --mode {mode} requires --guide <image>", file=sys.stderr)
sys.exit(1)
if tar_path is None:
try:
tar_path = _find_tar()
except FileNotFoundError as e:
print(f"[dream] {e}", file=sys.stderr)
sys.exit(1)
print(f"[dream] Tar : {tar_path.name}")
guide_stem = guide_path.stem[:20].replace(' ', '_')
if mode == 'retrieve':
retr_out = REPO_ROOT / "pipeline" / f"dream_retrieve_{guide_stem}.png"
R, G, B, canvas = render_retrieve(
guide_path, tar_path, n_mosaic, size,
slots, n_inst, n_slots_per, total
)
canvas.save(str(retr_out))
print(f"[dream] Written : {retr_out}")
elif mode == 'recall':
retr_out = REPO_ROOT / "pipeline" / f"dream_recall_{guide_stem}.png"
R, G, B, canvas = render_recall(
guide_path, tar_path, size,
slots, n_inst, n_slots_per, total
)
canvas.save(str(retr_out))
print(f"[dream] Written : {retr_out}")
print(f" Gold border = guide image (top-left)")
print(f" Cyan borders = exact slot-collision partners (lattice recall)")
elif mode == 'generate':
retr_out = REPO_ROOT / "pipeline" / f"dream_generate_{guide_stem}.png"
R, G, B, canvas = render_generate(
guide_path, tar_path, n_mosaic, size,
slots, n_inst, n_slots_per, total
)
canvas.save(str(retr_out))
print(f"[dream] Written : {retr_out}")
print(f" Guide pixels warped by the lattice's spatial memory field.")
print(f" 256 activated instances form a displacement map (dx, dy).")
print(f" Every pixel migrated — recognisable but unique, never before seen.")
return
# ── Derive output filename stem for this mode / query ─────────────────────
if out_path == OUT_DEFAULT:
stem = f"dream_{mode}"
if query_path is not None:
stem += f"_q{highlight_inst}"
out_path = REPO_ROOT / "pipeline" / f"{stem}.ppm"
# ── Terminal ───────────────────────────────────────────────────────────────
if terminal:
print(f"[dream] Rendering {size}x{size} → terminal t={t_offset:.3f}")
R, G, B = _render()
render_terminal(R, G, B, size)
return
# ── Animation ─────────────────────────────────────────────────────────────
if animate > 0:
stem = out_path.stem
suffix = out_path.suffix or ".ppm"
dt = TAU / animate
print(f"[dream] Animating {animate} frames at {size}x{size}")
for frame in range(animate):
t = t_offset + frame * dt
if mode == 'patches':
R, G, B = render_patches(slots, n_inst, n_slots_per, size, t,
highlight_inst=highlight_inst)
elif mode == 'sorted':
R, G, B = render_sorted(slots, n_inst, n_slots_per, size, t,
highlight_inst=highlight_inst)
else:
R, G, B = render_dream(slots, n_inst, size, t)
fp = out_path.parent / f"{stem}_frame_{frame:04d}{suffix}"
write_ppm(fp, R, G, B, size)
sys.stdout.write(f"\r[dream] Frame {frame+1:3d}/{animate} t={t:.3f}")
sys.stdout.flush()
print(f"\n[dream] Done — {animate} frames in {out_path.parent}/")
return
# ── Single frame ──────────────────────────────────────────────────────────
print(f"[dream] Rendering {size}x{size} t={t_offset:.3f} ...")
R, G, B = _render()
write_ppm(out_path, R, G, B, size)
png_written = try_write_png(out_path, R, G, B, size)
view_path = out_path.with_suffix('.png') if png_written else out_path
print(f"[dream] Written : {view_path}")
if mode != 'field':
cols = int(math.isqrt(n_inst))
patch = max(2, size // cols)
n_seeded = sum(
1 for i in range(n_inst)
if any(slots[i * n_slots_per + ch][0] > 0 for ch in range(min(n_slots_per, 4)))
)
fill_pct = 100.0 * n_seeded / n_inst
print(f" Instances seeded : {n_seeded}/{n_inst} ({fill_pct:.1f}%)")
print(f" Patch size : {patch}×{patch} px (grid {cols}×{cols})")
if highlight_inst >= 0:
ri = highlight_inst // cols
ci = highlight_inst % cols
s0 = slots[highlight_inst * n_slots_per]
print(f" Query instance : [{ri},{ci}] "
f"phase={s0[2]:.3f} freq={s0[3]:.3f} "
f"charge={s0[0]}")
print()
print(f" Pictorial outputs (start here):")
print(f" python pipeline/dream_image.py --mode reconstruct --size 512")
print(f" python pipeline/dream_image.py --mode spectral --size 512")
print()
print(f" Topology maps:")
print(f" python pipeline/dream_image.py --mode patches --size 512")
print(f" python pipeline/dream_image.py --mode attractor --size 512")
print(f" python pipeline/dream_image.py --mode sorted --size 512")
print()
print(f" Find where a specific image lives:")
print(f" python pipeline/dream_image.py --mode patches --query <image.jpg>")
if __name__ == "__main__":
main()
Outputs:
From here, we attempt pixel mapping.
I guess the bot thought I might be able to understand what this means in tunable terms. I cannot, but it is interesting nonetheless.
#!/usr/bin/env python3
"""
dream_image.py — Renders the HDGL lattice's current "dream state" as a 2D image.
The lattice has absorbed 49,341 text tokens and 8,696 forum images via phi-Fourier
encoding. Each of the 16,384 slots carries (phase, freq) bias vectors — the
spectral fingerprint of every piece of content that ever touched that slot.
Six rendering modes:
reconstruct — *** MOST IMAGE-LIKE *** Each of the 4096 content addresses becomes
one pixel (64×64), coloured by cos(phase) weighted by log-charge.
Bicubic-upscaled + Gaussian-blurred to size×size. The result is a
soft, painterly portrait of the mean spectral character of every image
family the lattice absorbed. Run this first.
spectral — Inverse-FFT reconstruction. Treats the 64×64 phase grid as a 2D
Fourier spectrum (amplitude = charge, angle = phase) and does ifft2.
Returns a spatial image whose frequency energy matches the corpus.
Produces wetter, more radial, painterly output than reconstruct.
Often looks like an X-ray or long-exposure photo of the forum.
field — All 16,384 slots summed simultaneously into one interference field.
The Chladni/brainwave portrait of everything at once (original mode).
patches — 64×64 mosaic of Gabor fingerprint patches, one per content address.
Each patch shows ONLY that instance's 4 slots. Topology map of absorbed
content. Bright = heavily seeded. Similar stripes = similar images.
attractor — Charge density heat map. One pixel per instance. Colour = per-slot
charge in R/G/B. Shows WHERE content concentrated.
sorted — Patches re-arranged so spectrally similar attractors sit next to each other.
--query PATH — Feed any raw file through the same phi-Fourier → djb2 encoder.
Highlights which patch in the mosaic that content maps to.
Usage:
python pipeline/dream_image.py --mode reconstruct --size 512 # START HERE
python pipeline/dream_image.py --mode spectral --size 512 # IFFT portrait
# Guided dream — steer the reconstruction toward a specific image:
python pipeline/dream_image.py --mode reconstruct --guide myimage.jpg
python pipeline/dream_image.py --mode spectral --guide myimage.jpg
python pipeline/dream_image.py --mode reconstruct --guide myimage.jpg --neighbors 128
python pipeline/dream_image.py # field mode, 256×256
python pipeline/dream_image.py --mode patches --size 512
python pipeline/dream_image.py --mode attractor --size 512
python pipeline/dream_image.py --mode sorted --size 512
python pipeline/dream_image.py --mode patches --query myimage.jpg
python pipeline/dream_image.py --terminal
python pipeline/dream_image.py --animate 60
python pipeline/dream_image.py --t 3.14
python pipeline/dream_image.py --lattice metal_infer/hdgl_lattice_corpus.bin
Licensed per https://zchg.org/t/legal-notice-copyright-applicable-ip-and-licensing-read-me/440
"""
import math
import struct
import sys
from pathlib import Path
PHI = 1.6180339887498948
TAU = 2.0 * math.pi
REPO_ROOT = Path(__file__).resolve().parent.parent
LATTICE_BIN = REPO_ROOT / "metal_infer" / "hdgl_lattice_corpus.bin"
OUT_DEFAULT = REPO_ROOT / "pipeline" / "dream_latest.ppm"
# Binary format — mirrors image_seeder.py and hdgl_bootloaderz.c
HEADER_FMT = '<4sIIIddd'
HEADER_SIZE = struct.calcsize(HEADER_FMT) # 40 bytes
SLOT_FMT = '<QqddII'
SLOT_SIZE = struct.calcsize(SLOT_FMT) # 40 bytes
# ── Lattice loader ────────────────────────────────────────────────────────────
def load_lattice(path: Path):
raw = path.read_bytes()
if len(raw) < HEADER_SIZE:
raise ValueError(f"File too small: {len(raw)} bytes")
sig, ver, n_inst, n_slots_per, lat_t, omega, pvar = struct.unpack_from(HEADER_FMT, raw, 0)
if sig != b'HDGL':
raise ValueError(f"Bad magic: {sig!r}")
total = n_inst * n_slots_per
expected = HEADER_SIZE + total * SLOT_SIZE
if len(raw) != expected:
raise ValueError(f"Size mismatch: got {len(raw)}, expected {expected}")
slots = []
off = HEADER_SIZE
for _ in range(total):
slots.append(list(struct.unpack_from(SLOT_FMT, raw, off)))
off += SLOT_SIZE
return n_inst, n_slots_per, total, lat_t, omega, pvar, slots
# ── Dream renderer (numpy path) ───────────────────────────────────────────────
def render_dream_numpy(slots, n_inst: int, size: int, t_offset: float):
import numpy as np
# Pixel coordinate grids: x ∈ [-1,1], y ∈ [-1,1]
x = np.linspace(-1.0, 1.0, size, dtype=np.float32)
y = np.linspace(-1.0, 1.0, size, dtype=np.float32)
xx, yy = np.meshgrid(x, y) # (size, size)
# Accumulate R, G, B interference fields + luminance modulation
fields = [np.zeros((size, size), dtype=np.float64) for _ in range(3)]
lum = np.zeros((size, size), dtype=np.float64)
for inst in range(n_inst):
for ch in range(4):
s = slots[inst * 4 + ch]
# Slot fields: [mantissa_word0, exponent, phase, freq, state_flags, strand_idx]
phase_v = s[2] # float64 — accumulated phase bias
freq_v = s[3] # float64 — accumulated freq bias
strand = int(s[5]) # Spiral8 strand index
# Spatial frequency: fold into visible range (0.5 – ~4 cycles per field).
# Use phi-modular folding so frequency distribution stays interesting.
sf = 0.5 + abs(freq_v) % (3.0 * PHI)
# Wave direction: phase_v determines orientation.
# Lattice time sweeps both axes, but the phi-stretched y-axis ensures
# R, G, B channels drift apart visibly rather than evolving in lockstep.
t_strand = t_offset * (1.0 + strand * PHI * 0.01)
angle = phase_v + t_strand
dx = math.cos(angle)
dy = math.sin(angle * PHI)
# Project pixel coordinates onto wave direction
proj = xx * dx + yy * dy
# Amplitude: sigmoid of freq so heavily-seeded slots shine brighter.
# Slots that absorbed many images/tokens have large freq_v → amp → 1.
amp = math.tanh(abs(freq_v) * 0.4 + 0.05)
wave = amp * np.cos(TAU * sf * proj)
if ch < 3:
fields[ch] += wave
else:
lum += wave
# Luminance modulation: normalize lum to [0, 0.4] and boost brightness of
# high-energy regions (where many slots with similar phase converged).
lum_norm = (lum - lum.min()) / max(lum.max() - lum.min(), 1e-9) * 0.4
out_channels = []
for f in fields:
f = f * (1.0 + lum_norm)
lo, hi = f.min(), f.max()
if hi - lo < 1e-9:
out_channels.append(np.full((size, size), 128, dtype=np.uint8))
else:
out_channels.append(((f - lo) / (hi - lo) * 255.0).astype(np.uint8))
return out_channels[0], out_channels[1], out_channels[2]
# ── Dream renderer (pure Python fallback — use small size) ───────────────────
def render_dream_pure(slots, n_inst: int, size: int, t_offset: float):
fields = [[[0.0] * size for _ in range(size)] for _ in range(3)]
lum_f = [[0.0] * size for _ in range(size)]
xs = [2.0 * c / max(size - 1, 1) - 1.0 for c in range(size)]
ys = [2.0 * r / max(size - 1, 1) - 1.0 for r in range(size)]
for inst in range(n_inst):
for ch in range(4):
s = slots[inst * 4 + ch]
phase_v = s[2]
freq_v = s[3]
strand = int(s[5])
sf = 0.5 + abs(freq_v) % (3.0 * PHI)
t_strand = t_offset * (1.0 + strand * PHI * 0.01)
angle = phase_v + t_strand
dx = math.cos(angle)
dy = math.sin(angle * PHI)
amp = math.tanh(abs(freq_v) * 0.4 + 0.05)
for r in range(size):
for c in range(size):
proj = xs[c] * dx + ys[r] * dy
w = amp * math.cos(TAU * sf * proj)
if ch < 3:
fields[ch][r][c] += w
else:
lum_f[r][c] += w
# Normalize
lum_lo = min(v for row in lum_f for v in row)
lum_hi = max(v for row in lum_f for v in row)
lum_rng = max(lum_hi - lum_lo, 1e-9)
out_channels = []
for f in fields:
flat = [v for row in f for v in row]
lo, hi = min(flat), max(flat)
rng = max(hi - lo, 1e-9)
ch_out = []
for r in range(size):
row = []
for c in range(size):
v = f[r][c] * (1.0 + (lum_f[r][c] - lum_lo) / lum_rng * 0.4)
row.append(max(0, min(255, int((v - lo) / rng * 255))))
ch_out.append(row)
out_channels.append(ch_out)
return out_channels[0], out_channels[1], out_channels[2]
def render_dream(slots, n_inst: int, size: int, t_offset: float):
try:
import numpy # noqa: F401
return render_dream_numpy(slots, n_inst, size, t_offset)
except ImportError:
size = min(size, 64)
print(f"[dream] numpy not available — pure Python fallback at {size}x{size}",
file=sys.stderr)
return render_dream_pure(slots, n_inst, size, t_offset)
# ── Output writers ────────────────────────────────────────────────────────────
def write_ppm(path: Path, R, G, B, size: int):
    size = len(R)  # trust the actual rendered size (fallback/patch renderers may clamp or round it)
    try:
import numpy as np
pixels = np.stack([R, G, B], axis=-1)
with open(path, 'wb') as f:
f.write(f"P6\n{size} {size}\n255\n".encode())
f.write(pixels.tobytes())
except ImportError:
with open(path, 'wb') as f:
f.write(f"P6\n{size} {size}\n255\n".encode())
for row in range(size):
for col in range(size):
f.write(bytes([R[row][col], G[row][col], B[row][col]]))
def try_write_png(path: Path, R, G, B, size: int) -> bool:
"""Write PNG if Pillow is available. Returns True on success."""
try:
import numpy as np
from PIL import Image
img = Image.fromarray(np.stack([R, G, B], axis=-1), mode='RGB')
png = path.with_suffix('.png')
img.save(png)
print(f"[dream] PNG: {png}")
return True
except ImportError:
return False
# ── phi-Fourier + djb2 (mirrors image_seeder.py exactly) ────────────────────
FOURIER_N = 12
SAMPLES_N = 256
def _djb2(data: bytes) -> int:
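    # Classic djb2 string hash: h = h*33 + byte, masked to 32 bits (the lattice's content addressing).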
h = 5381
for b in data:
h = ((h * 33) + b) & 0xFFFFFFFF
return h
def _phi_fourier_sample(raw: bytes) -> list:
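    # Stride the raw bytes down to SAMPLES_N evenly spaced samples, normalised to 0..1.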
n = len(raw)
if n == 0:
return [0.0] * SAMPLES_N
stride = n / SAMPLES_N
return [raw[int(i * stride)] / 255.0 for i in range(SAMPLES_N)]
def _phi_fourier_encode(samples: list):
"""
Mirrors image_seeder.py phi_fourier_encode exactly.
Returns (cos_b[FOURIER_N], temporal_phase).
"""
n = len(samples)
cos_b = []
for k in range(FOURIER_N):
freq = (k + 0.1) * PHI
sc = ss = 0.0
for i, s in enumerate(samples):
angle = TAU * freq * i / n
sc += s * math.cos(angle)
ss += s * math.sin(angle)
cos_b.append(sc / n)
temporal_phase = math.fmod(cos_b[0] * math.pi * PHI, TAU)
return cos_b, temporal_phase
def compute_guided_mask(slots, n_inst: int, n_slots_per: int,
guide_path: Path, k: int) -> set:
"""
Encode `guide_path` identically to image_seeder.py, extract its
(phase, freq) signature, rank all 4096 instances by phase-space distance,
and return the indices of the K nearest ones.
Distance metric:
d = sqrt( (cos(ph_i)-cos(q_ph))^2 + (sin(ph_i)-sin(q_ph))^2
+ 0.1*(fr_i - q_fr)^2 )
Using cos/sin difference instead of raw angle difference handles wrap-around.
freq weight 0.1 keeps phase the primary signal (freq accumulates nonlinearly).
"""
raw = guide_path.read_bytes()[:4096]
samples = _phi_fourier_sample(raw)
cos_b, q_phase = _phi_fourier_encode(samples)
# Derive a scalar freq proxy from the harmonics — same energy as image_seeder
q_freq = sum(abs(c) for c in cos_b) / (len(cos_b) or 1)
q_cos = math.cos(q_phase)
q_sin = math.sin(q_phase)
distances = []
for i in range(n_inst):
s0 = slots[i * n_slots_per] # slot 0 = primary phase/freq
ph = float(s0[2])
fr = abs(float(s0[3]))
dp = (math.cos(ph) - q_cos) ** 2 + (math.sin(ph) - q_sin) ** 2
df = (fr - q_freq) ** 2
distances.append((dp + 0.1 * df, i))
distances.sort()
return set(i for _, i in distances[:k])
def apply_guided_mask(slots, n_inst: int, n_slots_per: int, mask: set) -> list:
"""
Return a copy of slots where every instance NOT in `mask` has its
charge zeroed (so it contributes nothing to the reconstruction).
Phase/freq are preserved so the spectral geometry is intact.
"""
out = [list(s) for s in slots]
for i in range(n_inst):
if i not in mask:
for ch in range(n_slots_per):
out[i * n_slots_per + ch][0] = 0 # charge = 0
return out
def find_query_instance(path: Path, total_slots: int, n_slots_per: int) -> int:
"""
Load any file, phi-Fourier encode the first 4096 raw bytes, run djb2 on the
first 64 quantised samples — exactly what image_seeder.py does. Returns the
instance index (0..n_inst-1) that this file maps to.
"""
raw = path.read_bytes()[:4096]
samples = _phi_fourier_sample(raw)
key_bytes = bytes(int(s * 255) & 0xFF for s in samples[:64])
slot_idx = _djb2(key_bytes) % total_slots
return slot_idx // n_slots_per # which instance
# ── Mode: patches — fingerprint mosaic ───────────────────────────────────────
def render_patches(slots, n_inst: int, n_slots_per: int, size: int, t_offset: float,
highlight_inst: int = -1, order=None):
"""
64×64 mosaic (for 4096 instances). Each cell = Gabor fingerprint of one instance.
Bright patches = many images/tokens converged (high charge).
Dark patches = sparse / unabsorbed content address.
Similar stripes = spectrally related content.
Highlighted cell (yellow border) = the attractor for --query file.
`order` — optional list of instance indices to remap position→instance.
Used by render_sorted to spatially arrange by similarity.
"""
import numpy as np
cols = int(math.isqrt(n_inst)) # 64 for 4096 instances
rows = (n_inst + cols - 1) // cols
patch = max(2, size // cols)
H = rows * patch
W = cols * patch
R = np.zeros((H, W), dtype=np.float64)
G = np.zeros((H, W), dtype=np.float64)
B = np.zeros((H, W), dtype=np.float64)
# Global charge range for amplitude scaling
charges = np.array([
sum(float(slots[i * n_slots_per + ch][0]) for ch in range(min(n_slots_per, 4)))
for i in range(n_inst)
], dtype=np.float64)
max_charge = float(charges.max()) or 1.0
# Local pixel coordinate grid for each patch
px = np.linspace(-1.0, 1.0, patch, dtype=np.float32)
gx, gy = np.meshgrid(px, px) # (patch, patch)
if order is None:
order = list(range(n_inst))
for pos, inst in enumerate(order):
ri = pos // cols
ci = pos % cols
if ri >= rows:
break
y0 = ri * patch
x0 = ci * patch
charge_norm = math.log1p(charges[inst]) / math.log1p(max_charge) # log-scale
for ch_idx, field in enumerate([R, G, B]):
if ch_idx >= n_slots_per:
continue
s = slots[inst * n_slots_per + ch_idx]
phase_v = float(s[2])
freq_v = float(s[3])
strand = int(s[5])
sf = 0.5 + abs(freq_v) % (3.0 * PHI)
t_s = t_offset * (1.0 + strand * PHI * 0.01)
angle = phase_v + t_s
# Gabor: oriented stripe at `angle`, asymmetric axes for colour separation
proj = gx * math.cos(angle) + gy * math.sin(angle * PHI)
amp = math.tanh(abs(freq_v) * 0.4 + 0.05) * (0.15 + 0.85 * charge_norm)
field[y0:y0 + patch, x0:x0 + patch] += amp * np.cos(TAU * sf * proj)
# Highlight the query instance with a yellow border (2-pixel ring)
if 0 <= highlight_inst < n_inst:
pos = order.index(highlight_inst) if highlight_inst in order else -1
if pos >= 0:
ri = pos // cols
ci = pos % cols
y0 = ri * patch
x0 = ci * patch
bw = max(1, patch // 6) # border width scales with patch size
# Yellow = full R, full G, no B
R[y0:y0 + patch, x0:x0 + bw] = R.max()
R[y0:y0 + patch, x0 + patch - bw:x0 + patch] = R.max()
R[y0:y0 + bw, x0:x0 + patch] = R.max()
R[y0 + patch - bw:y0 + patch, x0:x0 + patch] = R.max()
G[y0:y0 + patch, x0:x0 + bw] = G.max()
G[y0:y0 + patch, x0 + patch - bw:x0 + patch] = G.max()
G[y0:y0 + bw, x0:x0 + patch] = G.max()
G[y0 + patch - bw:y0 + patch, x0:x0 + patch] = G.max()
B[y0:y0 + patch, x0:x0 + bw] = B.min()
B[y0:y0 + patch, x0 + patch - bw:x0 + patch] = B.min()
B[y0:y0 + bw, x0:x0 + patch] = B.min()
B[y0 + patch - bw:y0 + patch, x0:x0 + patch] = B.min()
out = []
for f in [R, G, B]:
lo, hi = f.min(), f.max()
if hi - lo < 1e-9:
out.append(np.full((H, W), 24, dtype=np.uint8))
else:
out.append(((f - lo) / (hi - lo) * 255).astype(np.uint8))
return out[0], out[1], out[2]
# ── Mode: attractor — charge density heat map ────────────────────────────────
def render_attractor(slots, n_inst: int, n_slots_per: int, size: int,
highlight_inst: int = -1):
"""
One pixel per instance. Colour = per-channel log-charge. Upscaled to size×size.
Bright spots = heavily seeded content addresses. Dark = sparse.
"""
import numpy as np
cols = int(math.isqrt(n_inst))
rows = (n_inst + cols - 1) // cols
patch = max(1, size // cols)
chs = []
for ch in range(3):
c = np.array([
math.log1p(float(slots[i * n_slots_per + ch][0])) if ch < n_slots_per else 0.0
for i in range(n_inst)
], dtype=np.float64).reshape(rows, cols)
lo, hi = c.min(), c.max()
if hi > lo:
c = ((c - lo) / (hi - lo) * 255).astype(np.uint8)
else:
c = np.zeros((rows, cols), dtype=np.uint8)
# Upscale by patch via nearest-neighbour (kron)
chs.append(np.kron(c, np.ones((patch, patch), dtype=np.uint8)))
# Yellow highlight border
if 0 <= highlight_inst < n_inst:
ri = highlight_inst // cols
ci = highlight_inst % cols
y0, x0 = ri * patch, ci * patch
bw = max(1, patch // 4)
for f in [chs[0], chs[1]]: # R+G = yellow
f[y0:y0+patch, x0:x0+bw] = 255
f[y0:y0+patch, x0+patch-bw:x0+patch] = 255
f[y0:y0+bw, x0:x0+patch] = 255
f[y0+patch-bw:y0+patch, x0:x0+patch] = 255
chs[2][y0:y0+patch, x0:x0+bw] = 0
chs[2][y0:y0+patch, x0+patch-bw:x0+patch] = 0
chs[2][y0:y0+bw, x0:x0+patch] = 0
chs[2][y0+patch-bw:y0+patch, x0:x0+patch] = 0
return chs[0], chs[1], chs[2]
# ── Mode: sorted — spectral sort mosaic ──────────────────────────────────────
def compute_sorted_order(slots, n_inst: int, n_slots_per: int) -> list:
"""
Embed each instance as (x, y) = (cos(phase0)*|freq0|, sin(phase0)*|freq0|).
Sort by x then y so spectrally similar attractors sit next to each other.
Returns the permuted instance list (position → instance).
"""
def embed(i):
s = slots[i * n_slots_per]
ph = float(s[2])
fr = abs(float(s[3]))
return (math.cos(ph) * fr, math.sin(ph) * fr)
return sorted(range(n_inst), key=embed)
def render_sorted(slots, n_inst: int, n_slots_per: int, size: int, t_offset: float,
highlight_inst: int = -1):
"""Patches mode with spectral-sort ordering so similar attractors are adjacent."""
order = compute_sorted_order(slots, n_inst, n_slots_per)
return render_patches(slots, n_inst, n_slots_per, size, t_offset,
highlight_inst=highlight_inst, order=order)
# ── Mode: reconstruct — phase-map bicubic upscale ───────────────────────────
def render_reconstruct(slots, n_inst: int, n_slots_per: int, size: int):
"""
Most image-like mode. Each content address = one pixel (64×64 grid).
Pixel colour:
cos(phase) — correlates with mean luminance of images that hashed here,
because temporal_phase was derived from cos_b[0] of the raw
byte phi-Fourier transform.
weighted by log(charge+1) so heavily-seeded content addresses are bright.
Output is bicubic-upscaled and Gaussian-blurred to size×size.
Looks like a soft watercolour portrait of the corpus image content.
"""
import numpy as np
cols = int(math.isqrt(n_inst)) # 64
rows = (n_inst + cols - 1) // cols # 64
channels = []
for ch in range(min(n_slots_per, 3)):
phase = np.array(
[math.cos(float(slots[i * n_slots_per + ch][2])) for i in range(n_inst)],
dtype=np.float64
).reshape(rows, cols)
charge = np.array(
[math.log1p(float(slots[i * n_slots_per + ch][0])) for i in range(n_inst)],
dtype=np.float64
).reshape(rows, cols)
charge /= (charge.max() or 1.0)
# Blend: structural phase signal + charge brightness
combined = phase * (0.4 + 0.6 * charge)
lo, hi = combined.min(), combined.max()
normed = (combined - lo) / (hi - lo + 1e-9)
channels.append(normed)
try:
from PIL import Image, ImageFilter
blur_r = max(1.0, size / cols * 0.7) # ~5.6 px at 512/64
out = []
for c in channels:
img = Image.fromarray((c * 255).astype(np.uint8), mode='L')
img = img.resize((size, size), Image.BICUBIC)
img = img.filter(ImageFilter.GaussianBlur(radius=blur_r))
out.append(np.array(img))
return out[0], out[1], out[2]
except ImportError:
scale = max(1, size // cols)
out = []
for c in channels:
out.append(np.kron((c * 255).astype(np.uint8),
np.ones((scale, scale), dtype=np.uint8)))
return out[0], out[1], out[2]
# ── Mode: spectral — inverse-FFT portrait ────────────────────────────────────
def render_spectral(slots, n_inst: int, n_slots_per: int, size: int):
"""
Inverse-FFT reconstruction. Treats the accumulated slot state as a 2D
Fourier spectrum (amplitude = log-charge, angle = slot phase) and recovers
the spatial image whose frequency content matches what the lattice absorbed.
ifftshift centres low-frequency energy — smooth/bright regions dominate the
centre of the image, high-frequency structure radiates outward. The result
OFTEN resembles a long-exposure or Fourier-transform portrait of the forum:
radially symmetric gradients, luminous cores, spectral halos.
"""
import numpy as np
cols = int(math.isqrt(n_inst))
rows = (n_inst + cols - 1) // cols
out_channels = []
for ch in range(min(n_slots_per, 3)):
phases = np.array(
[float(slots[i * n_slots_per + ch][2]) for i in range(n_inst)],
dtype=np.float64
).reshape(rows, cols)
charges = np.array(
[math.log1p(float(slots[i * n_slots_per + ch][0])) for i in range(n_inst)],
dtype=np.float64
).reshape(rows, cols)
charges /= (charges.max() or 1.0)
# Complex Fourier spectrum
spectrum = charges * np.exp(1j * phases)
# Inverse FFT — ifftshift centres the low-frequency content
spatial = np.fft.ifft2(np.fft.ifftshift(spectrum))
img = np.abs(spatial)
lo, hi = img.min(), img.max()
normed = (img - lo) / (hi - lo + 1e-9)
out_channels.append(normed)
try:
from PIL import Image, ImageFilter
blur_r = max(0.5, size / cols * 0.35) # lighter blur than reconstruct
result = []
for c in out_channels:
img = Image.fromarray((c * 255).astype(np.uint8), mode='L')
img = img.resize((size, size), Image.BICUBIC)
img = img.filter(ImageFilter.GaussianBlur(radius=blur_r))
result.append(np.array(img))
return result[0], result[1], result[2]
except ImportError:
scale = max(1, size // cols)
result = []
for c in out_channels:
result.append(np.kron((c * 255).astype(np.uint8),
np.ones((scale, scale), dtype=np.uint8)))
return result[0], result[1], result[2]
# ── Mode: retrieve — pull real similar images from the tar ───────────────────
def _find_tar() -> Path:
candidates = list((REPO_ROOT / "Discourse File").glob("*.tar.gz"))
if candidates:
return candidates[0]
raise FileNotFoundError(
f"No tar.gz found in 'Discourse File/'. Pass --tar <path> explicitly."
)
IMAGE_EXTS = {'.png', '.jpg', '.jpeg', '.gif', '.webp'}
def render_retrieve(guide_path: Path, tar_path: Path, n_top: int, size: int,
slots, n_inst: int, n_slots_per: int, total: int):
"""
Uses the lattice as a content-addressable retrieval index.
1. Encode guide_path with the same phi-Fourier → djb2 pipeline as image_seeder.py
2. Walk every image in tar_path, encode each the same way
3. Rank by phase-space distance to the guide signature
4. Decode the top n_top images with Pillow, tile into a sqrt(n_top)×sqrt(n_top) grid
5. Apply a subtle phase-hue tint from the matched lattice slot (the "trippy" layer)
Result: a real mosaic of real forum images that the lattice considers similar
to the guide — recognizable content, gently coloured by the lattice's dream state.
"""
import tarfile
import io
try:
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter
except ImportError:
print("[dream] retrieve mode requires Pillow: pip install Pillow", file=sys.stderr)
sys.exit(1)
# Encode guide
raw_guide = guide_path.read_bytes()[:4096]
samp_guide = _phi_fourier_sample(raw_guide)
cos_b_guide, q_phase = _phi_fourier_encode(samp_guide)
q_freq = sum(abs(c) for c in cos_b_guide) / (len(cos_b_guide) or 1)
q_cos = math.cos(q_phase)
q_sin = math.sin(q_phase)
print(f"[dream] Scoring images in tar — this takes ~30–90 s for 8k images …")
    candidates = []  # (distance, tar_member_name)
with tarfile.open(str(tar_path), 'r:gz') as tf:
for member in tf:
if not member.isfile():
continue
if Path(member.name).suffix.lower() not in IMAGE_EXTS:
continue
try:
raw = tf.extractfile(member).read(4096)
except Exception:
continue
samp = _phi_fourier_sample(raw)
cos_b, ph = _phi_fourier_encode(samp)
fr = sum(abs(c) for c in cos_b) / (len(cos_b) or 1)
dp = (math.cos(ph) - q_cos)**2 + (math.sin(ph) - q_sin)**2
df = (fr - q_freq)**2
dist = dp + 0.1 * df
candidates.append((dist, member.name))
candidates.sort(key=lambda x: x[0])
top_names = [name for _, name in candidates[:n_top]]
print(f"[dream] Top {n_top} matches (closest first):")
for rank, (dist, name) in enumerate(candidates[:n_top], 1):
print(f" {rank:2d}. {Path(name).name} d={dist:.4f}")
# Extract actual image bytes for the top candidates
top_images = {}
for member in tf:
if member.name in top_names and member.isfile():
try:
top_images[member.name] = tf.extractfile(member).read()
except Exception:
pass
# Order top images by score
ordered = [top_images[n] for n in top_names if n in top_images]
# Tile size per cell
cols_g = max(1, int(math.isqrt(n_top)))
rows_g = (n_top + cols_g - 1) // cols_g
cell = size // cols_g
canvas = Image.new('RGB', (cols_g * cell, rows_g * cell), (20, 20, 20))
for idx, img_bytes in enumerate(ordered):
ri = idx // cols_g
ci = idx % cols_g
try:
img = Image.open(io.BytesIO(img_bytes)).convert('RGB')
except Exception:
img = Image.new('RGB', (cell, cell), (40, 40, 40))
        # Resize to fill the cell (aspect ratio is not preserved)
img = img.resize((cell, cell), Image.LANCZOS)
# ── Lattice phase tint: colour the cell with the hue of its
# matched slot — the "dream" colouring layer ────────────────
name = top_names[idx] if idx < len(top_names) else ""
raw_b = (ordered[idx])[:4096]
sampb = _phi_fourier_sample(raw_b)
keyb = bytes(int(s * 255) & 0xFF for s in sampb[:64])
si = _djb2(keyb) % total
inst_i = si // n_slots_per
s0 = slots[inst_i * n_slots_per]
phase_i = float(s0[2])
freq_i = abs(float(s0[3]))
        # Phase maps to hue via HSV; freq gently raises the saturation
hue = (phase_i % TAU) / TAU # 0..1
sat = min(0.35, 0.05 + freq_i * 0.04) # gentle tint, not a flood
# Apply as a soft colour overlay using Pillow
overlay = Image.new('RGB', (cell, cell),
_hsv_to_rgb_bytes(hue, sat, 0.5))
img = Image.blend(img, overlay, alpha=0.18) # 18% tint
# Subtle sharpening so retrieved images stay readable
img = img.filter(ImageFilter.UnsharpMask(radius=1.2, percent=60, threshold=3))
canvas.paste(img, (ci * cell, ri * cell))
R = np.array(canvas)[:, :, 0]
G = np.array(canvas)[:, :, 1]
B = np.array(canvas)[:, :, 2]
return R, G, B, canvas
def _hsv_to_rgb_bytes(h: float, s: float, v: float) -> tuple:
"""Convert HSV (0..1 each) to (R, G, B) 0..255 tuple."""
if s == 0.0:
c = int(v * 255)
return (c, c, c)
i = int(h * 6.0) % 6
f = h * 6.0 - int(h * 6.0)
p = v * (1.0 - s)
q = v * (1.0 - f * s)
t = v * (1.0 - (1.0 - f) * s)
row = [(v,t,p),(q,v,p),(p,v,t),(p,q,v),(t,p,v),(v,p,q)][i]
return tuple(int(x * 255) for x in row)
# ── Mode: recall — exact-slot images (true lattice recall) ──────────────────
def render_recall(guide_path: Path, tar_path: Path, size: int,
slots, n_inst: int, n_slots_per: int, total: int):
"""
True lattice reconstruction.
The djb2 hash maps every image to one of 4096 content addresses.
Multiple images can map to the SAME address (hash collision).
Those collision partners are the images the lattice "stored together" —
they have the most structurally similar compressed byte distributions.
This mode:
1. Computes the guide's exact slot address (djb2 mod total).
2. Walks the entire tar and finds every OTHER image that hashes to
the same slot — these are true lattice-recall partners.
3. Returns those images as a grid.
If zero collisions: falls back to the 4 nearest-neighbour images.
The guide itself is excluded from the results (self-recall is trivial).
This is the closest thing to "reconstructing the image from the lattice"
that the slot representation actually supports — the address IS the image's
identity in the lattice; the collision set IS its stored family.
"""
import tarfile, io
try:
import numpy as np
from PIL import Image, ImageFilter
except ImportError:
print("[dream] recall mode requires Pillow: pip install Pillow", file=sys.stderr)
sys.exit(1)
# Compute guide's exact slot address
raw_g = guide_path.read_bytes()[:4096]
samp_g = _phi_fourier_sample(raw_g)
key_g = bytes(int(s * 255) & 0xFF for s in samp_g[:64])
target = _djb2(key_g) % total # exact slot index
guide_name = guide_path.name
print(f"[dream] Guide slot address : {target} (instance {target // n_slots_per})")
# Walk tar: bucket images by slot address, also compute distances for fallback
exact = [] # (name, bytes) — same slot as guide, not guide itself
scored = [] # (dist, name) for fallback
cos_b_g, q_ph = _phi_fourier_encode(samp_g)
q_freq = sum(abs(c) for c in cos_b_g) / (len(cos_b_g) or 1)
q_cos, q_sin = math.cos(q_ph), math.sin(q_ph)
raw_store = {} # name → raw bytes (for exact matches)
print(f"[dream] Scanning tar for slot-{target} collision partners …")
with tarfile.open(str(tar_path), 'r:gz') as tf:
for member in tf:
if not member.isfile():
continue
if Path(member.name).suffix.lower() not in IMAGE_EXTS:
continue
try:
raw = tf.extractfile(member).read(4096)
except Exception:
continue
samp = _phi_fourier_sample(raw)
key = bytes(int(s * 255) & 0xFF for s in samp[:64])
addr = _djb2(key) % total
cos_b, ph = _phi_fourier_encode(samp)
fr = sum(abs(c) for c in cos_b) / (len(cos_b) or 1)
dp = (math.cos(ph) - q_cos)**2 + (math.sin(ph) - q_sin)**2
dist = dp + 0.1 * (fr - q_freq)**2
scored.append((dist, member.name))
if addr == target and Path(member.name).name != guide_name:
exact.append(member.name)
raw_store[member.name] = raw # keep header bytes for later
if exact:
print(f"[dream] Exact slot collisions : {len(exact)}")
else:
scored.sort()
fallback_names = [n for _, n in scored[1:9]] # skip self (d≈0)
print(f"[dream] No exact collisions — using {len(fallback_names)} nearest neighbours")
exact = fallback_names
# Second pass: full bytes for exact/fallback names (separate open to reset cursor)
want_names = set(exact)
full_bytes = {}
with tarfile.open(str(tar_path), 'r:gz') as tf2:
for member in tf2:
if member.name in want_names and member.isfile():
try:
full_bytes[member.name] = tf2.extractfile(member).read()
except Exception:
pass
ordered = [full_bytes[n] for n in exact if n in full_bytes]
if not ordered:
print("[dream] No images decoded.", file=sys.stderr)
sys.exit(1)
print(f"[dream] Recall partners ({len(ordered)}):")
for n in exact[:len(ordered)]:
print(f" {Path(n).name}")
# Also include guide itself at top-left so user sees it alongside recalls
all_imgs = [guide_path.read_bytes()] + ordered
cols_g = max(1, int(math.isqrt(len(all_imgs))))
rows_g = (len(all_imgs) + cols_g - 1) // cols_g
cell = size // max(cols_g, 1)
canvas = Image.new('RGB', (cols_g * cell, rows_g * cell), (10, 10, 20))
for idx, img_bytes in enumerate(all_imgs):
ri = idx // cols_g
ci = idx % cols_g
try:
img = Image.open(io.BytesIO(img_bytes)).convert('RGB')
img = img.resize((cell, cell), Image.LANCZOS)
except Exception:
img = Image.new('RGB', (cell, cell), (30, 30, 30))
# Gold border on guide (idx==0), cyan on collision partners
border_col = (255, 215, 0) if idx == 0 else (0, 200, 220)
bw = max(2, cell // 32)
arr = np.array(img)
arr[:bw, :] = border_col
arr[-bw:, :] = border_col
arr[:, :bw] = border_col
arr[:, -bw:] = border_col
canvas.paste(Image.fromarray(arr), (ci * cell, ri * cell))
R = np.array(canvas)[:, :, 0]
G = np.array(canvas)[:, :, 1]
B = np.array(canvas)[:, :, 2]
return R, G, B, canvas
# ── Mode: generate — weighted blend → novel image in cluster style ────────────
def _compute_displacement_fields(guide_path: Path, slots, n_inst: int, n_slots_per: int,
size: int):
"""
Shared displacement field helper used by `generate` and `warpmap`.
Encodes the guide → (q_phase, q_freq), ranks all 4096 instances by
phase-space distance, takes the 256 nearest, and builds two IFFT-based
scalar fields (dx, dy) normalised to [-1, +1] at target resolution.
Returns: dx_norm, dy_norm, warp_x, warp_y, q_ph, q_freq, active
dx/dy_norm : (size, size) float64 in [-1, +1]
warp_x/y : pixel offsets = ±20% of `size`
active : [(dist, instance_idx), …] 256 entries sorted by distance
"""
import numpy as np
from PIL import Image
raw_g = guide_path.read_bytes()[:4096]
samp_g = _phi_fourier_sample(raw_g)
cos_b_g, q_ph = _phi_fourier_encode(samp_g)
q_freq = sum(abs(c) for c in cos_b_g) / (len(cos_b_g) or 1)
q_cos, q_sin = math.cos(q_ph), math.sin(q_ph)
k = 256
distances = []
for i in range(n_inst):
s0 = slots[i * n_slots_per]
ph = float(s0[2])
fr = abs(float(s0[3]))
dp = (math.cos(ph) - q_cos) ** 2 + (math.sin(ph) - q_sin) ** 2
distances.append((dp + 0.1 * (fr - q_freq) ** 2, i))
distances.sort()
active = distances[:k]
max_inv = 1.0 / (active[0][0] + 1e-9)
cols_lat = int(math.isqrt(n_inst))
rows_lat = (n_inst + cols_lat - 1) // cols_lat
def _field(ch: int) -> np.ndarray:
spectrum = np.zeros((rows_lat, cols_lat), dtype=complex)
for dist, i in active:
row = i // cols_lat
col = i % cols_lat
s = slots[i * n_slots_per + ch]
amp = math.log1p(float(s[0])) * (1.0 / (dist + 1e-9)) / max_inv
ph = float(s[2])
spectrum[row, col] = complex(amp * math.cos(ph), amp * math.sin(ph))
spatial = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
lo, hi = spatial.min(), spatial.max()
normed = (spatial - lo) / (hi - lo + 1e-9) * 2.0 - 1.0
pil = Image.fromarray(((normed + 1.0) * 127.5).astype(np.uint8), mode='L')
pil = pil.resize((size, size), Image.BICUBIC)
return (np.array(pil, dtype=np.float64) - 127.5) / 127.5
dx_norm = _field(0)
dy_norm = _field(1)
warp_x = dx_norm * size * 0.20
warp_y = dy_norm * size * 0.20
return dx_norm, dy_norm, warp_x, warp_y, q_ph, q_freq, active
def _bilinear_warp(src: 'np.ndarray', warp_x: 'np.ndarray',
warp_y: 'np.ndarray') -> 'np.ndarray':
"""
Inverse bilinear warp: for each destination pixel (r,c), sample `src`
at (r + warp_y[r,c], c + warp_x[r,c]). `src` can be 2-D or 3-D (H,W,C).
Returns output with same dtype and shape as src.
"""
import numpy as np
h, w = src.shape[:2]
row_idx = np.arange(h)[:, None] + warp_y
col_idx = np.arange(w)[None, :] + warp_x
row_idx = np.clip(row_idx, 0, h - 1)
col_idx = np.clip(col_idx, 0, w - 1)
r0 = row_idx.astype(np.int32)
c0 = col_idx.astype(np.int32)
r1 = np.clip(r0 + 1, 0, h - 1)
c1 = np.clip(c0 + 1, 0, w - 1)
fr = row_idx - r0
fc = col_idx - c0
if src.ndim == 2:
out = (src[r0, c0] * (1-fr)*(1-fc) + src[r0, c1] * (1-fr)*fc +
src[r1, c0] * fr*(1-fc) + src[r1, c1] * fr*fc)
else:
out = np.zeros_like(src, dtype=np.float64)
for ch in range(src.shape[2]):
g = src[:, :, ch].astype(np.float64)
out[:, :, ch] = (g[r0,c0]*(1-fr)*(1-fc) + g[r0,c1]*(1-fr)*fc +
g[r1,c0]*fr*(1-fc) + g[r1,c1]*fr*fc)
return out
def render_generate(guide_path: Path, tar_path: Path, n_top: int, size: int,
slots, n_inst: int, n_slots_per: int, total: int):
"""
Warps the guide image through the lattice's spatial memory field.
No corpus images are read. The tar is not opened.
Use `--mode warpmap` to see exactly where each pixel goes.
"""
try:
import numpy as np
from PIL import Image
except ImportError:
print("[dream] generate requires Pillow + numpy: pip install Pillow numpy",
file=sys.stderr)
sys.exit(1)
guide_img = Image.open(str(guide_path)).convert('RGB').resize((size, size), Image.LANCZOS)
guide_arr = np.array(guide_img, dtype=np.float64)
_, _, warp_x, warp_y, q_ph, q_freq, active = _compute_displacement_fields(
guide_path, slots, n_inst, n_slots_per, size)
print(f"[dream] Warping guide through {len(active)} activated lattice instances …")
print(f"[dream] Guide phase: {q_ph:.4f} freq: {q_freq:.4f}")
out = _bilinear_warp(guide_arr, warp_x, warp_y)
canvas = Image.fromarray(np.clip(out, 0, 255).astype(np.uint8), mode='RGB')
arr = np.array(canvas)
return arr[:, :, 0], arr[:, :, 1], arr[:, :, 2], canvas
# ── Mode: warpmap — pixel-tracking diagnostic ─────────────────────────────────
def render_warpmap(guide_path: Path, size: int,
slots, n_inst: int, n_slots_per: int, total: int):
"""
3-panel pixel-mapping diagnostic — shows exactly where each guide pixel goes.
Left — Guide image (reference anchor).
Middle — Deformed fishnet grid: a regular 32-px mesh warped by the same
displacement field used in `generate`. Straight lines = zero
distortion; curves/bulges = where the lattice pushes pixels.
The guide is ghosted at 30% behind the grid so you can correlate
features to their departure vectors.
Right — Optical-flow false-colour: each pixel is coloured by the DIRECTION
and MAGNITUDE of its displacement.
Hue = direction (red=right, green=up-left, cyan=left, magenta=down)
Value = magnitude (black = pixel not moving, bright = large shift)
Read hue at any landmark in the left panel to find which coloured
zone it falls in on the right → that is the direction it travels.
No corpus images are read. The output is `size × 3*size` wide to fit all
three panels side by side.
"""
try:
import numpy as np
from PIL import Image, ImageDraw
except ImportError:
print("[dream] warpmap requires Pillow + numpy: pip install Pillow numpy",
file=sys.stderr)
sys.exit(1)
_, _, warp_x, warp_y, q_ph, q_freq, active = _compute_displacement_fields(
guide_path, slots, n_inst, n_slots_per, size)
print(f"[dream] Pixel map: {len(active)} activated instances "
f"phase={q_ph:.4f} freq={q_freq:.4f} warp=±20%")
h = w = size
guide_img = Image.open(str(guide_path)).convert('RGB').resize((size, size), Image.LANCZOS)
guide_arr = np.array(guide_img, dtype=np.float64)
# ── Panel 1: guide reference ──────────────────────────────────────────────
panel1 = guide_arr.copy()
# ── Panel 2: warped fishnet grid ghosted over guide ───────────────────────
spacing = max(16, size // 20)
grid_src = np.zeros((h, w), dtype=np.float64)
for r in range(0, h, spacing):
grid_src[max(0, r-1):r+2, :] = 255.0
for c in range(0, w, spacing):
grid_src[:, max(0, c-1):c+2] = 255.0
warped_grid = _bilinear_warp(grid_src, warp_x, warp_y)
# Ghost guide at 30%, white grid lines at 70%
grid_rgb = np.stack([warped_grid]*3, axis=-1)
panel2 = np.clip(guide_arr * 0.30 + grid_rgb * 0.70, 0, 255)
# ── Panel 3: optical-flow false-colour ────────────────────────────────────
magnitude = np.sqrt(warp_x**2 + warp_y**2)
direction = np.arctan2(warp_y, warp_x) # −π .. +π
mag_norm = magnitude / (magnitude.max() + 1e-9) # 0 .. 1
# HSV → RGB (vectorised, S=1 so p=0, q=V*(1-f), t=V*f)
hue = (direction + math.pi) / (2 * math.pi) # 0 .. 1
h6 = hue * 6.0
hi = h6.astype(np.int32) % 6
ff = h6 - np.floor(h6)
V = mag_norm
t_ = V * ff
q_ = V * (1.0 - ff)
z = np.zeros_like(V)
R_f = np.select([hi==0, hi==1, hi==2, hi==3, hi==4, hi==5], [V, q_, z, z, t_, V])
G_f = np.select([hi==0, hi==1, hi==2, hi==3, hi==4, hi==5], [t_, V, V, q_, z, z])
B_f = np.select([hi==0, hi==1, hi==2, hi==3, hi==4, hi==5], [z, z, t_, V, V, q_])
panel3 = np.stack([R_f, G_f, B_f], axis=-1) * 255.0
# ── Combine panels horizontally ───────────────────────────────────────────
combined = np.concatenate([panel1, panel2, panel3], axis=1).astype(np.uint8)
canvas = Image.fromarray(combined, mode='RGB')
# Thin divider lines between panels
draw = ImageDraw.Draw(canvas)
draw.line([(w, 0), (w, h-1)], fill=(100, 100, 100), width=2)
draw.line([(2*w, 0), (2*w, h-1)], fill=(100, 100, 100), width=2)
arr = np.array(canvas)
return arr[:, :, 0], arr[:, :, 1], arr[:, :, 2], canvas
def render_terminal(R, G, B, size: int):
"""Render dream as ANSI true-color block art in the terminal."""
    import shutil
    size = len(R)  # use the actual rendered size (the pure-Python fallback may have clamped it)
    tw, th = shutil.get_terminal_size((80, 40))
tw = min(tw, 160)
th = min(th - 3, 48)
scale_x = size / tw
scale_y = size / th
try:
import numpy as np
def sample(arr, ty, tx):
sy = min(int(ty * scale_y), size - 1)
sx = min(int(tx * scale_x), size - 1)
return int(arr[sy, sx])
except ImportError:
def sample(arr, ty, tx):
sy = min(int(ty * scale_y), size - 1)
sx = min(int(tx * scale_x), size - 1)
return int(arr[sy][sx])
print()
for ty in range(th):
row = ""
for tx in range(tw):
r = sample(R, ty, tx)
g = sample(G, ty, tx)
b = sample(B, ty, tx)
# Full block with ANSI true-color background
row += f"\x1b[48;2;{r};{g};{b}m\x1b[38;2;{r};{g};{b}m\u2588"
row += "\x1b[0m"
print(row)
print()
# ── Main ──────────────────────────────────────────────────────────────────────
def main():
lattice_path = LATTICE_BIN
out_path = OUT_DEFAULT
size = 256
t_offset = 0.0
terminal = False
animate = 0
mode = "field"
query_path = None
guide_path = None
n_neighbors = 256 # top 6.25% of 4096 — broad enough to be coherent
tar_path = None
n_mosaic = 9 # 3×3 grid for retrieve mode
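    # Minimal hand-rolled flag parser (no argparse).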
i = 1
while i < len(sys.argv):
a = sys.argv[i]
if a == '--lattice' and i + 1 < len(sys.argv): lattice_path = Path(sys.argv[i+1]); i += 2
elif a == '--out' and i + 1 < len(sys.argv): out_path = Path(sys.argv[i+1]); i += 2
elif a == '--size' and i + 1 < len(sys.argv): size = int(sys.argv[i+1]); i += 2
elif a == '--t' and i + 1 < len(sys.argv): t_offset = float(sys.argv[i+1]); i += 2
elif a == '--animate' and i + 1 < len(sys.argv): animate = int(sys.argv[i+1]); i += 2
elif a == '--mode' and i + 1 < len(sys.argv): mode = sys.argv[i+1]; i += 2
elif a == '--query' and i + 1 < len(sys.argv): query_path = Path(sys.argv[i+1]); i += 2
elif a == '--guide' and i + 1 < len(sys.argv): guide_path = Path(sys.argv[i+1]); i += 2
elif a == '--neighbors' and i + 1 < len(sys.argv): n_neighbors = int(sys.argv[i+1]); i += 2
elif a == '--tar' and i + 1 < len(sys.argv): tar_path = Path(sys.argv[i+1]); i += 2
elif a == '--mosaic' and i + 1 < len(sys.argv): n_mosaic = int(sys.argv[i+1]); i += 2
elif a == '--terminal': terminal = True; i += 1
elif a in ('--help', '-h'): print(__doc__); sys.exit(0)
else: print(f"[dream] Unknown arg: {a}", file=sys.stderr); i += 1
VALID_MODES = ('field', 'patches', 'attractor', 'sorted', 'reconstruct', 'spectral',
'retrieve', 'recall', 'generate', 'warpmap')
if mode not in VALID_MODES:
print(f"[dream] Unknown --mode {mode!r}. Choose from: {', '.join(VALID_MODES)}",
file=sys.stderr)
sys.exit(1)
if not lattice_path.exists():
print(f"[dream] Lattice not found: {lattice_path}", file=sys.stderr)
print(f" Run hdgl_corpus_seeder.exe first to generate it.", file=sys.stderr)
sys.exit(1)
print(f"[dream] Loading : {lattice_path.name}")
n_inst, n_slots_per, total, lat_t, omega, pvar, slots = load_lattice(lattice_path)
print(f"[dream] Lattice : {n_inst} instances x {n_slots_per} slots = {total} total")
print(f" State : t={lat_t:.3f} omega={omega:.4f} pvar={pvar:.4f}")
print(f"[dream] Mode : {mode}")
# ── Guided dream: find K-nearest instances and mask the rest ──────────────
if guide_path is not None:
if not guide_path.exists():
print(f"[dream] --guide file not found: {guide_path}", file=sys.stderr)
sys.exit(1)
print(f"[dream] Guide : {guide_path.name} (neighbors={n_neighbors})")
mask = compute_guided_mask(slots, n_inst, n_slots_per, guide_path, n_neighbors)
slots = apply_guided_mask(slots, n_inst, n_slots_per, mask)
guide_stem = guide_path.stem[:20].replace(' ', '_')
print(f" Focusing on {n_neighbors}/{n_inst} nearest instances "
f"({100*n_neighbors/n_inst:.1f}% of lattice)")
# Auto-rename output so it doesn't overwrite the unguided version
if out_path == OUT_DEFAULT:
out_path = REPO_ROOT / "pipeline" / f"dream_{mode}_guided_{guide_stem}.ppm"
# Resolve query instance
highlight_inst = -1
if query_path is not None:
if not query_path.exists():
print(f"[dream] --query file not found: {query_path}", file=sys.stderr)
sys.exit(1)
highlight_inst = find_query_instance(query_path, total, n_slots_per)
hi = n_inst // int(math.isqrt(n_inst)) # cols
print(f"[dream] Query : {query_path.name}")
print(f" → instance {highlight_inst} "
f"(row {highlight_inst // hi}, col {highlight_inst % hi}) "
f"[yellow border in output]")
# ── Render dispatch ────────────────────────────────────────────────────────
def _render():
if mode == 'field':
return render_dream(slots, n_inst, size, t_offset)
elif mode == 'patches':
return render_patches(slots, n_inst, n_slots_per, size, t_offset,
highlight_inst=highlight_inst)
elif mode == 'attractor':
return render_attractor(slots, n_inst, n_slots_per, size,
highlight_inst=highlight_inst)
elif mode == 'sorted':
return render_sorted(slots, n_inst, n_slots_per, size, t_offset,
highlight_inst=highlight_inst)
elif mode == 'reconstruct':
return render_reconstruct(slots, n_inst, n_slots_per, size)
elif mode == 'spectral':
return render_spectral(slots, n_inst, n_slots_per, size)
elif mode == 'retrieve':
raise RuntimeError("retrieve mode handled separately") # sentinel
# ── Warpmap: pixel-displacement diagnostic ──────────────────────────────
if mode == 'warpmap':
if guide_path is None:
print("[dream] --mode warpmap requires --guide <image>", file=sys.stderr)
sys.exit(1)
guide_stem = guide_path.stem[:20].replace(' ', '_')
wmap_out = REPO_ROOT / "pipeline" / f"dream_warpmap_{guide_stem}.png"
R, G, B, canvas = render_warpmap(
guide_path, size, slots, n_inst, n_slots_per, total
)
canvas.save(str(wmap_out))
print(f"[dream] Written : {wmap_out}")
print(f" Left = guide reference")
print(f" Middle = fishnet grid warped by lattice displacement field")
print(f" Bend direction = where pixels in that region travel")
print(f" Right = optical-flow colour (hue=direction, brightness=magnitude)")
print(f" Red=right Green=up-left Cyan=left Magenta=down-right")
return
# ── Retrieve / Recall / Generate: real-image modes ──────────────────────
if mode in ('retrieve', 'recall', 'generate'):
if guide_path is None:
print(f"[dream] --mode {mode} requires --guide <image>", file=sys.stderr)
sys.exit(1)
if tar_path is None:
try:
tar_path = _find_tar()
except FileNotFoundError as e:
print(f"[dream] {e}", file=sys.stderr)
sys.exit(1)
print(f"[dream] Tar : {tar_path.name}")
guide_stem = guide_path.stem[:20].replace(' ', '_')
if mode == 'retrieve':
retr_out = REPO_ROOT / "pipeline" / f"dream_retrieve_{guide_stem}.png"
R, G, B, canvas = render_retrieve(
guide_path, tar_path, n_mosaic, size,
slots, n_inst, n_slots_per, total
)
canvas.save(str(retr_out))
print(f"[dream] Written : {retr_out}")
elif mode == 'recall':
retr_out = REPO_ROOT / "pipeline" / f"dream_recall_{guide_stem}.png"
R, G, B, canvas = render_recall(
guide_path, tar_path, size,
slots, n_inst, n_slots_per, total
)
canvas.save(str(retr_out))
print(f"[dream] Written : {retr_out}")
print(f" Gold border = guide image (top-left)")
print(f" Cyan borders = exact slot-collision partners (lattice recall)")
elif mode == 'generate':
retr_out = REPO_ROOT / "pipeline" / f"dream_generate_{guide_stem}.png"
R, G, B, canvas = render_generate(
guide_path, tar_path, n_mosaic, size,
slots, n_inst, n_slots_per, total
)
canvas.save(str(retr_out))
print(f"[dream] Written : {retr_out}")
print(f" Guide pixels warped by the lattice's spatial memory field.")
print(f" 256 activated instances form a displacement map (dx, dy).")
print(f" Every pixel migrated — recognisable but unique, never before seen.")
return
# ── Derive output filename stem for this mode / query ─────────────────────
if out_path == OUT_DEFAULT:
stem = f"dream_{mode}"
if query_path is not None:
stem += f"_q{highlight_inst}"
out_path = REPO_ROOT / "pipeline" / f"{stem}.ppm"
# ── Terminal ───────────────────────────────────────────────────────────────
if terminal:
print(f"[dream] Rendering {size}x{size} → terminal t={t_offset:.3f}")
R, G, B = _render()
render_terminal(R, G, B, size)
return
# ── Animation ─────────────────────────────────────────────────────────────
if animate > 0:
stem = out_path.stem
suffix = out_path.suffix or ".ppm"
dt = TAU / animate
print(f"[dream] Animating {animate} frames at {size}x{size}")
for frame in range(animate):
t = t_offset + frame * dt
if mode == 'patches':
R, G, B = render_patches(slots, n_inst, n_slots_per, size, t,
highlight_inst=highlight_inst)
elif mode == 'sorted':
R, G, B = render_sorted(slots, n_inst, n_slots_per, size, t,
highlight_inst=highlight_inst)
else:
R, G, B = render_dream(slots, n_inst, size, t)
fp = out_path.parent / f"{stem}_frame_{frame:04d}{suffix}"
write_ppm(fp, R, G, B, size)
sys.stdout.write(f"\r[dream] Frame {frame+1:3d}/{animate} t={t:.3f}")
sys.stdout.flush()
print(f"\n[dream] Done — {animate} frames in {out_path.parent}/")
return
# ── Single frame ──────────────────────────────────────────────────────────
print(f"[dream] Rendering {size}x{size} t={t_offset:.3f} ...")
R, G, B = _render()
write_ppm(out_path, R, G, B, size)
png_written = try_write_png(out_path, R, G, B, size)
view_path = out_path.with_suffix('.png') if png_written else out_path
print(f"[dream] Written : {view_path}")
if mode != 'field':
cols = int(math.isqrt(n_inst))
patch = max(2, size // cols)
n_seeded = sum(
1 for i in range(n_inst)
if any(slots[i * n_slots_per + ch][0] > 0 for ch in range(min(n_slots_per, 4)))
)
fill_pct = 100.0 * n_seeded / n_inst
print(f" Instances seeded : {n_seeded}/{n_inst} ({fill_pct:.1f}%)")
print(f" Patch size : {patch}×{patch} px (grid {cols}×{cols})")
if highlight_inst >= 0:
ri = highlight_inst // cols
ci = highlight_inst % cols
s0 = slots[highlight_inst * n_slots_per]
print(f" Query instance : [{ri},{ci}] "
f"phase={s0[2]:.3f} freq={s0[3]:.3f} "
f"charge={s0[0]}")
print()
print(f" Pictorial outputs (start here):")
print(f" python pipeline/dream_image.py --mode reconstruct --size 512")
print(f" python pipeline/dream_image.py --mode spectral --size 512")
print()
print(f" Topology maps:")
print(f" python pipeline/dream_image.py --mode patches --size 512")
print(f" python pipeline/dream_image.py --mode attractor --size 512")
print(f" python pipeline/dream_image.py --mode sorted --size 512")
print()
print(f" Find where a specific image lives:")
print(f" python pipeline/dream_image.py --mode patches --query <image.jpg>")
if __name__ == "__main__":
main()
Output:
Now we forensically map the pixel data to see it come out the other side of the HDGL lattice.
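With the script as pasted, a pass like this is driven by the recall and warpmap modes pointed at a feeder image (the filename below is illustrative):
python pipeline/dream_image.py --mode recall --guide pipeline/feeder.png --size 512
python pipeline/dream_image.py --mode warpmap --guide pipeline/feeder.png --size 512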
Both new modes (recover and compose, described below) work. Here’s what the three-image run reveals — this is the key architectural insight:
Phase residual tells you memory fidelity:
| Feeder | Slot | Phase residual | Verdict |
|---|---|---|---|
| b08b1f0b | inst=1743 ch=2 | 0.16 rad | Clean cell — this image nearly owns its slot |
| b9083d44 | inst=3426 ch=2 | 2.73 rad | Blurred — heavy collision traffic at this address |
| bd6d5251 | inst=1180 ch=0 | 0.87 rad | Moderate collision |
b08b1f0b is your cleanest feeder for the Part 1 demonstration — its slot was barely contested. The IDFT reconstruction for that image should most faithfully represent what the lattice stored. b9083d44 (the grey forum page from yesterday’s analysis) ended up at a high-collision address, which explains why the warpmap produced a boring uniform shift — the slot’s phase has drifted far from the feeder’s own encoding.
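The script as pasted doesn’t print this residual itself; as a rough reading of what the number means, and reusing the helpers already defined above (_phi_fourier_sample, _phi_fourier_encode, _djb2), it can be taken as the shortest-arc distance between the feeder’s own temporal_phase and the phase its content-address slot has accumulated. A minimal sketch under that assumption, not the exact recover-mode code:
def phase_residual(feeder_path, slots, total, n_slots_per, ch=0):
    # Encode the feeder exactly like image_seeder.py, find its slot, and compare phases.
    raw = Path(feeder_path).read_bytes()[:4096]
    samples = _phi_fourier_sample(raw)
    _, feeder_phase = _phi_fourier_encode(samples)            # feeder's temporal_phase
    key = bytes(int(s * 255) & 0xFF for s in samples[:64])
    inst = (_djb2(key) % total) // n_slots_per                # its content address
    stored_phase = float(slots[inst * n_slots_per + ch][2])   # phase the lattice holds there
    d = abs(math.fmod(stored_phase - feeder_phase, TAU))
    return min(d, TAU - d)                                    # 0 rad = clean cell, near pi = heavily contested
A residual near 0 means the slot still looks like the feeder that seeded it; a residual approaching π means collision traffic has pulled the stored phase far away from it.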
What you now have:
--mode recover --guide <feeder.png> — Part 1:
- Left: feeder (reference)
- Middle: IDFT of the feeder’s own 12 phi-Fourier harmonics, coloured by 3 slot channels → the spectral portrait the lattice encoded and can address by content (a minimal sketch of this inversion follows after this list)
- Right: 64×64 slot-state heatmap, gold dot = the feeder’s exact slot address
- Phase residual printed: measures how much other images share this slot
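The middle panel is essentially the inverse of _phi_fourier_encode above: each of the 12 cosine projections is played back at its (k + 0.1) × φ frequency and summed. A minimal sketch of that inversion (the function name is illustrative, not something shipped in dream_image.py as pasted):
def phi_fourier_idft(cos_b, n=SAMPLES_N):
    # Re-synthesise a 256-sample waveform from the 12 phi-Fourier cosine projections.
    wave = []
    for i in range(n):
        v = 0.0
        for k, c in enumerate(cos_b):
            freq = (k + 0.1) * PHI                  # same harmonic ladder as the encoder
            v += c * math.cos(TAU * freq * i / n)
        wave.append(v)
    return wave
Rendering that waveform once per slot channel, coloured by the stored slot state, gives the spectral portrait described above.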
--mode compose --guide <feeder.png> --compose-neighbors 16 --compose-t <float> — Part 2:
- Left: feeder (reference)
- Right: novel synthesis — feeder cos_b blended with 16 nearest instances’ stored phases, then drifted by k × omega × t per harmonic → each harmonic rotates away from the original at its own rate, producing something in the spectral family but genuinely new.
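The drift step amounts to rotating each harmonic’s phase at its own rate before re-synthesis. A minimal sketch of that idea with illustrative names (amps from the feeder’s cos_b, blended_phases from the 16 nearest instances’ stored phases):
def compose_drift(amps, blended_phases, omega, t):
    # Harmonic k picks up k * omega * t of extra rotation, so higher harmonics drift away faster.
    return [(a, ph + k * omega * t)
            for k, (a, ph) in enumerate(zip(amps, blended_phases))]
The drifted (amplitude, phase) pairs then feed the same inverse step sketched above, keeping the result in the feeder’s spectral family while steadily rotating it away from the original.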
dream_image.zip (21.6 KB)