About Signal Lens

Signal Lens is an AI knowledge hub and interactive learning space from one practitioner's perspective, built in collaboration with AI tools and agents. It is designed as a practical, curated lens for real work, not an all-encompassing guide to everything in AI.

What you'll find here

The site is organized around six sections, each covering a different layer of working with AI:

Engineering Guides

Implementation-focused guides for builders, from low-code starts to advanced engineering workflows.

Tools

Reviews of AI tools across categories: coding assistants, chat interfaces, image and video generators, audio tools, local models, and APIs. Ratings combine my own testing with AI-assisted research (including deep research), broader market signals, and feedback from people using these tools in practice. The depth of firsthand testing varies by entry; for some tools it has been limited or is not yet hands-on.

Models

Structured profiles of major AI models covering capabilities, context windows, pricing, API access, and what each model is actually good for.

Prompts

Prompt templates for development, analysis, and creative work — with variable slots, usage notes, and example output. Think of them as starter patterns for inspiration, then adapt them or ask your preferred LLM to generate versions tailored to your context.

Concepts

Interactive explainers of how AI works under the hood. Attention mechanisms, tokenization, embeddings, transformers, RAG, and more — each with live visualizations you can explore directly in the browser.

Use Cases

Real-world AI workflow patterns documented from practice: architecture decisions, database migrations, backend performance, and other applied patterns. These are representative examples rather than a complete inventory, and you can use your preferred LLM to shape them into workflows specific to your domain and constraints.

How it's built

Signal Lens is itself an AI-collaborative project — designed, built, and maintained with a mix of tools, especially OpenAI Codex and Claude Code, plus direct hand coding. The site is static (Astro + React islands), deployed to Cloudflare Pages, and every piece of content is a Markdown file in a Git repository. No CMS, no database — just files and code.

The interactive concept visualizations are built with Three.js and React. The content schemas are type-safe with Zod. Every push to the main branch triggers an automatic deployment.
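As a sketch of what a type-safe content schema can look like with Astro and Zod (the collection name and fields here are illustrative, not the site's actual schema), a content collection for tool reviews might be defined as:

```typescript
// src/content/config.ts — hypothetical schema; field names are illustrative
import { defineCollection, z } from "astro:content";

// Each Markdown file in src/content/tools/ is validated against this
// schema at build time; a missing or mistyped frontmatter field fails
// the build instead of shipping broken content.
const tools = defineCollection({
  type: "content",
  schema: z.object({
    title: z.string(),
    category: z.enum(["coding", "chat", "image", "video", "audio", "local", "api"]),
    rating: z.number().min(1).max(5),
    lastReviewed: z.date(),
    handsOn: z.boolean().default(true),
  }),
});

export const collections = { tools };
```

Because the schema lives in code, renaming or tightening a field is an ordinary Git change that the next build immediately enforces across every entry.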

I also use other models and tools for research and specialized tasks. Trying different tools in real work is intentional: without hands-on testing, it's hard to gauge actual capabilities, tradeoffs, and where each tool genuinely fits.

A living project

The AI landscape moves fast. Models change, tools appear and disappear, and understanding deepens. This site is updated continuously to reflect what's actually current — not as a publication, but as a working reference.