After Dark
Walk around with your phone anywhere in the world and get a story back. After Dark uses your real location to generate AI narratives, illustrations, and voice narration — one place at a time.
Most location apps tell you facts. After Dark tells you stories. The app detects your location, passes it as context to an LLM, and returns a narrative — styled as a local legend, travel vignette, spooky tale, or children's story — alongside a generated illustration and voice narration.
A mobile-first React app built in Lovable with four integrated layers: geolocation detection, AI story generation via Gemini LLM, image generation tied to story content, and ElevenLabs voice narration with a play/pause control. Stories save to a Supabase database with image URLs stored in cloud storage.
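Persistence is a thin layer over Supabase. A minimal sketch of what the save path might look like, assuming a `stories` table and a `story-images` storage bucket — both names are illustrative, not the app's actual schema:

```ts
import { createClient } from "@supabase/supabase-js";

// Vite-style env vars (Lovable scaffolds on Vite); adjust to your setup.
const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY
);

async function saveStory(story: {
  title: string;
  body: string;
  lat: number;
  lng: number;
  imageBlob: Blob;
}) {
  // Upload the generated illustration to cloud storage first...
  const path = `illustrations/${crypto.randomUUID()}.png`;
  const { error: uploadError } = await supabase.storage
    .from("story-images")
    .upload(path, story.imageBlob, { contentType: "image/png" });
  if (uploadError) throw uploadError;

  // ...then persist the story row with the image's public URL.
  const { data: urlData } = supabase.storage
    .from("story-images")
    .getPublicUrl(path);

  const { error: insertError } = await supabase.from("stories").insert({
    title: story.title,
    body: story.body,
    lat: story.lat,
    lng: story.lng,
    image_url: urlData.publicUrl,
  });
  if (insertError) throw insertError;
}
```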
Four APIs. One prompt.
The design challenge wasn't just building features — it was making four independent APIs feel like one seamless experience. Geolocation feeds the LLM. The LLM output feeds the image generator. The image and text together feed ElevenLabs. Each layer depends on the last.
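In code, that dependency chain is just sequential awaits. A sketch under assumed names — `generateStory`, `generateIllustration`, and `narrate` are hypothetical wrappers around the respective APIs, declared here as externals; the data flow between them is the point:

```ts
// Hypothetical wrappers around the four services, assumed to exist.
type Story = { title: string; body: string; imagePrompt: string };

declare function generateStory(loc: { lat: number; lng: number }): Promise<Story>;
declare function generateIllustration(prompt: string): Promise<string>; // image URL
declare function narrate(text: string): Promise<Blob>;                  // audio clip

async function buildExperience(coords: GeolocationCoordinates) {
  // Layer 1 → 2: the detected location becomes LLM context.
  const story = await generateStory({ lat: coords.latitude, lng: coords.longitude });

  // Layer 2 → 3: the LLM's structured output carries its own image prompt.
  const illustration = await generateIllustration(story.imagePrompt);

  // Layer 2 → 4: the story text feeds the ElevenLabs narration.
  const audio = await narrate(story.body);

  return { story, illustration, audio };
}
```

Note that illustration and narration each depend only on the story, not on each other — a detail that matters later when sequencing the layers for speed.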
The system prompt is the product.
Story quality lives or dies in the system prompt. Using the RICECO framework — Role, Instruction, Context, Examples, Constraints, Output Format — I engineered a stable, versioned prompt that accepts dynamic location and user preferences as inputs without modifying the core behavior. The result: consistent story quality across wildly different locations and styles. A sketch of the template follows the breakdown below.
- R: Role
Defines the model as a location-aware storyteller — not a search engine, not a tour guide.
- I: Instruction
What the model must do every call: produce a 100–200 word narrative tied to the supplied location.
- C: Context
Dynamic location data and selected story style are injected here — never hardcoded into the prompt.
- E: Examples
Few-shot samples lock tone and pacing so a 'spooky tale' never reads like a 'travel vignette'.
- C: Constraints
Forbids facts, dates, or claims that can hallucinate — keeps stories evocative, not informational.
- O: Output Format
Strict shape: title, body, image prompt — so downstream APIs can consume it without parsing logic.
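Here is a sketch of what a RICECO-shaped system prompt might look like. The wording is illustrative, not the production prompt; only the six-part structure and the separation of dynamic data from the versioned template reflect the approach described above:

```ts
// Illustrative RICECO template. The versioned system prompt stays
// static; location and style arrive per request in the user message.
const SYSTEM_PROMPT = `
ROLE: You are a location-aware storyteller, not a search engine or tour guide.

INSTRUCTION: Write a 100-200 word narrative rooted in the location supplied by the user.

CONTEXT: The user message supplies a location and a story style. Use both; assume nothing else.

EXAMPLES:
[few-shot samples per style, locking tone and pacing, go here]

CONSTRAINTS: No facts, dates, or verifiable claims. Evocative, never informational.

OUTPUT FORMAT: Reply with JSON only: {"title": string, "body": string, "imagePrompt": string}
`.trim();

// Dynamic inputs never touch the system prompt itself.
function buildUserMessage(location: string, style: string): string {
  return `Location: ${location}\nStyle: ${style}`;
}
```

Because the dynamic values live in the user message, the system prompt can be versioned and diffed like any other design artifact.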
Full Stack Shipped
Geolocation. LLM. Image Gen. Voice. All live.
A working mobile app integrating geolocation, LLM text generation, AI image generation, and voice narration — built and deployed in a single course session.
Prompt Engineering in Practice
RICECO framework, applied end-to-end.
Applied the RICECO framework to engineer a stable system prompt that produces consistent, styled narratives across any location input — with dynamic content passed as context, never baked into the prompt.
Multi-Modal AI Stack
Four APIs chained into one experience.
Geolocation API → Gemini LLM (narrative) → Image Generation (illustration) → ElevenLabs (voice narration). Four independent services chained so the output of each layer becomes the input of the next.
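The first link in that chain is the browser Geolocation API, wrapped in a Promise so it composes with the async layers downstream. A sketch of that standard pattern:

```ts
// Wrap the callback-based browser Geolocation API in a Promise.
function detectLocation(): Promise<GeolocationCoordinates> {
  return new Promise((resolve, reject) => {
    if (!("geolocation" in navigator)) {
      reject(new Error("Geolocation unsupported on this device"));
      return;
    }
    navigator.geolocation.getCurrentPosition(
      (position) => resolve(position.coords),
      (error) => reject(error),
      { enableHighAccuracy: false, timeout: 10_000 }
    );
  });
}
```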
- The system prompt is a design artifact.
Writing the RICECO prompt felt more like UX writing than coding. Every word affects output quality — tone, length, what the model refuses to do, what it always includes. It deserves the same rigor as a design spec.
- Chaining APIs is an architecture problem.
Getting four APIs working in isolation is easy. Making them feel like one product requires thinking about failure states, load sequencing, and what happens when one layer is slow (see the sketch after this list). That's a design problem as much as a technical one.
- Ship a working app, not a prototype.
The gap between a Figma mockup and a working app that uses real AI APIs is where real learning happens. This course forced that gap closed.
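One way that thinking about failure states might land in code: a per-layer timeout with a fallback, so the mandatory story arrives even when an enhancement layer stalls. A hypothetical sketch, reusing the assumed helpers from the pipeline sketch above:

```ts
type Story = { title: string; body: string; imagePrompt: string };
declare function generateStory(loc: { lat: number; lng: number }): Promise<Story>;
declare function generateIllustration(prompt: string): Promise<string>;
declare function narrate(text: string): Promise<Blob>;

// Race a layer against a timeout; resolve to a fallback instead of failing.
function withTimeout<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  const timer = new Promise<T>((resolve) => setTimeout(() => resolve(fallback), ms));
  return Promise.race([work, timer]);
}

async function buildExperienceResilient(coords: { lat: number; lng: number }) {
  // The story is mandatory; if the LLM fails, the experience fails.
  const story = await generateStory(coords);

  // Illustration and narration depend only on the story, so they run
  // concurrently, and each degrades to null rather than blocking playback.
  const [illustration, audio] = await Promise.all([
    withTimeout<string | null>(generateIllustration(story.imagePrompt), 15_000, null),
    withTimeout<Blob | null>(narrate(story.body), 15_000, null),
  ]);

  return { story, illustration, audio };
}
```

Treating the story as mandatory and the other layers as progressive enhancements is the sequencing decision: the user always gets text, and the illustration and audio arrive if their APIs keep up.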