Tutorials & Tech

Knowledge Sharing

Fun technology tutorials for a cyberpunk future. We share what we learn about AI content creation, open-source tools, and creative coding — because knowledge wants to be free.

Tutorial

AI Content Creation: Cinematic Cat Videos Without a Camera

Every video on our Instagram is made entirely with AI. No camera, no studio, no film crew. Just text prompts, generative video models, and a Python script that syncs everything to music. Here's how we do it.

AI generated content visualization

The Tools We Use

🎬

Kling AI 2.6

Our primary video generation model. Kling excels at cinematic camera movements, consistent character appearance across clips, and photorealistic output. We use it for hero shots and any scene that needs to look indistinguishable from real footage.

Video Gen Cinematic 5-10s clips
🎦

Runway Gen-4.5

Our go-to for stylized and artistic content. Runway handles abstract visuals, transitions, and creative effects better than any other model. We use it when we want a video to feel more like digital art than photography.

Video Gen Stylized Creative FX
🤖

Google Veo 3.1

Google's latest video model brings exceptional prompt adherence and smooth motion. We use Veo for scenes requiring precise actions (a cat turning its head, walking toward camera) where following the prompt exactly matters most.

Video Gen Precise Smooth Motion

The Workflow

Step 1: Script & Storyboard

Every video starts with a concept. We write a shot list describing each scene: the subject, camera angle, lighting, mood, and movement. This becomes the prompt sheet. A 60-second video typically needs 8-12 individual clips, each 4-8 seconds long.

Shot 01: "Close-up of a white Persian cat, neon green eyes, looking directly at camera, shallow depth of field, cinematic lighting, 4K"

Shot 02: "Slow dolly forward through a cyberpunk alley, rain, neon signs reflected in puddles, a white cat sitting on a crate, moody atmosphere"
🎬

Step 2: Generate Clips

We feed each prompt into the appropriate AI model. Kling for photorealistic hero shots, Runway for stylized sequences, Veo for precise movements. Each prompt usually generates 3-5 variations. We cherry-pick the best take for each shot, looking for visual consistency, motion quality, and adherence to the storyboard.

Pro tip: generating multiple variations is key. AI video models are non-deterministic — the same prompt produces different results each time, so a single generation rarely gives you the best possible take. Think of it like film production: you wouldn't use the first take of every scene.

🎵

Step 3: Beat-Synced Editing with Python

This is where it gets technical. We wrote a Python script that analyzes a music track, detects beats using librosa, and automatically cuts between clips on beat drops. The script uses moviepy for video editing and handles crossfades, speed ramps, and color grading. The result: cinematic, music-synced videos that feel professionally edited.

# Simplified beat-sync workflow
import librosa
import moviepy.editor as mp

# Detect beats in the audio track
y, sr = librosa.load("track.mp3")
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beats, sr=sr)

# Cut clips on each beat: each cut runs until the next beat
timeline = []
for i in range(len(beat_times) - 1):
    duration = beat_times[i + 1] - beat_times[i]
    clip = clips[i % len(clips)]  # clips: pre-loaded VideoFileClip objects
    timeline.append(clip.subclip(0, min(duration, clip.duration)))

final = mp.concatenate_videoclips(timeline)
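The timing logic in that loop can be pulled out into a small pure function (a hypothetical helper, not taken from our actual script) so you can sanity-check the cut plan without loading any video files:

```python
def beat_cut_plan(beat_times, clip_count):
    """Return (clip_index, cut_duration) pairs, one cut per beat interval."""
    plan = []
    for i in range(len(beat_times) - 1):
        duration = beat_times[i + 1] - beat_times[i]  # cut runs until next beat
        plan.append((i % clip_count, duration))       # rotate through the clips
    return plan

# Beats at 0.0, 0.5, 1.0, 1.5 seconds with three clips:
# each clip gets one half-second cut
print(beat_cut_plan([0.0, 0.5, 1.0, 1.5], 3))  # → [(0, 0.5), (1, 0.5), (2, 0.5)]
```

Each beat interval becomes exactly one cut, and the clip index simply wraps around whatever footage you have, which is why a 60-second track never runs out of clips.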
🚀

Step 4: Polish & Publish

Final touches include color grading for visual consistency across clips, adding text overlays, adjusting audio levels, and exporting at the right resolution for each platform. Instagram Reels, TikTok, and YouTube Shorts all have different optimal specs. We batch-export from the Python pipeline to hit all platforms in one run.
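As a sketch of what the batch-export step can look like, here is a hypothetical preset table and job builder (the names, sizes, and frame rates are illustrative, not our production values; verify each platform's current specs before relying on them):

```python
# Hypothetical per-platform export presets. All three platforms currently
# favor 9:16 vertical video, but the "right" numbers change over time.
PRESETS = {
    "instagram_reels": {"size": (1080, 1920), "fps": 30},
    "tiktok": {"size": (1080, 1920), "fps": 30},
    "youtube_shorts": {"size": (1080, 1920), "fps": 30},
}

def export_jobs(basename, presets):
    """Build the (filename, width, height, fps) list for one batch-export run."""
    return [
        (f"{basename}_{name}.mp4", spec["size"][0], spec["size"][1], spec["fps"])
        for name, spec in presets.items()
    ]

for job in export_jobs("cat_video", PRESETS):
    print(job)
```

In a real pipeline each job tuple would feed a resize-and-write call (e.g. moviepy's `write_videofile`), so adding a platform is a one-line change to the preset table rather than a manual re-export.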

Why Python over Premiere/DaVinci?

Repeatability. Once the script is written, creating a new video takes minutes instead of hours. Change the music track, swap in new clips, and the script handles all the timing automatically. It's the cyberpunk way — automate everything.

"We're not filmmakers. We're not video editors. We're cat breeders who learned Python and figured out how to make AI do the heavy lifting. If we can do it, you can too."

— Persian Punks
Tutorial

Animeify Images with Python

Turn any photo into anime-style art using PyTorch and AnimeGAN — a lightweight generative adversarial network that transforms real images into anime versions. No LLM required, no API calls, no subscription fees. Just open-source machine learning running on your own hardware.

Anime style digital art

How It Works

AnimeGAN is a generative adversarial network trained specifically on anime art styles. It learns the visual patterns of anime (bold outlines, flat colors, stylized shading) and applies them to real photographs. The model runs entirely locally — your images never leave your machine.

PyTorch GAN Open Source
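One practical detail worth knowing before you feed photos in: generators built from strided convolutions typically require input dimensions divisible by the network's total stride, commonly 32. A tiny helper (hypothetical name, shown here for illustration) does the rounding:

```python
def to_model_size(width, height, multiple=32):
    """Round image dimensions down to a multiple of `multiple`, since
    GAN generators with strided convolutions usually require input
    sizes divisible by their total downsampling factor."""
    return (width // multiple) * multiple, (height // multiple) * multiple

# A 1921x1080 photo gets trimmed to the nearest valid size
print(to_model_size(1921, 1080))  # → (1920, 1056)
```

Resizing (or cropping) to these dimensions before inference avoids shape-mismatch errors when the network downsamples and upsamples the image.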

What You Need

Python 3.8+, PyTorch, and a GPU with at least 4GB VRAM (CPU works too, just slower). The AnimeGAN model weights are freely available and only around 8MB. No cloud API, no tokens, no ongoing costs. Install the dependencies, download the weights, and you're running in under five minutes.

Python 3.8+ PyTorch GPU Optional

Try It Right Now

We built a browser-based demo so you can test AnimeGAN without installing anything. Upload a photo, pick an anime style, and see the result in seconds. It runs client-side in your browser — your images stay private.

Launch Animeify Demo
# Quick start — animeify an image
import torch
from PIL import Image
from torchvision import transforms
from model import Generator  # Generator class from the AnimeGAN repo

# Load the pretrained AnimeGAN model
model = Generator()
model.load_state_dict(torch.load("animeGAN_weights.pth", map_location="cpu"))
model.eval()

# Transform and process your image (inference only, so no gradients needed)
img = Image.open("my_cat.jpg").convert("RGB")
tensor = transforms.ToTensor()(img).unsqueeze(0)
with torch.no_grad():
    result = model(tensor)

# Clamp to the valid pixel range and save the anime version
output = transforms.ToPILImage()(result.squeeze(0).clamp(0, 1))
output.save("my_cat_anime.jpg")
Open Source

comma.ai / openpilot

openpilot is an open-source driver assistance system developed by comma.ai. It provides adaptive cruise control and lane centering for over 300 supported car models — essentially giving your existing car semi-autonomous driving capabilities for a fraction of the cost of factory ADAS upgrades.

Car dashboard with driver assistance technology

What It Does

openpilot provides two core features: adaptive cruise control (maintains speed and following distance) and lane centering (keeps your car centered in the lane). On supported cars, it handles highway driving with minimal driver input. It uses a camera-first approach, similar to Tesla's vision system, but fully open source.

The Hardware

comma.ai sells the comma 3X, a dedicated device that mounts behind your rearview mirror and connects to the car through a model-specific harness. It runs openpilot, records driving data, and processes everything on-device. The entire system is designed to be installed in under 30 minutes with no permanent modifications to your car.

Why We Care

Open-source self-driving software is the most cyberpunk technology on the planet right now. A community of hackers and engineers building driver assistance that rivals billion-dollar OEM systems — and giving it away for free. We use openpilot daily on our drives across Texas, and it is genuinely impressive.

"The comma 3X is the best driving companion we own, and openpilot gets better with every update. It's open source, it's community-driven, and it turns a regular car into something that feels like the future."

— Persian Punks

Coming Soon

More tutorials and deep-dives in the pipeline. Follow us for updates.

💻

Local LLMs with Ollama

Run large language models locally on your own hardware. No cloud, no API keys, no data leaving your machine. We'll cover setup, model selection, and practical use cases.

Coming Soon
🎨

AI Image Generation Deep Dive

A comprehensive guide to creating consistent characters and scenes with Flux, Midjourney, and Stable Diffusion. Prompt engineering, LoRA training, and workflow automation.

Coming Soon
🔐

Privacy-First Tech Stack

The tools and services we use to stay private online. From ProtonMail to VPNs, encrypted messaging, and privacy-respecting alternatives to big tech services.

Coming Soon

Follow @persian.punks for More Tech Content

We share tutorials, AI experiments, behind-the-scenes content creation, and of course — Persian cats. The cyberpunk cat community is growing.

Follow on Instagram Get in Touch