01 — AI Prototyping · Maven Course

After Dark

Walk around with your phone anywhere in the world and get a story back. After Dark uses your real location to generate AI narratives, illustrations, and voice narration — one place at a time.

View Live App
AI Prototyping for Designers · Anna Arteeva
4 · AI / API Layers Integrated
7 · Story Styles Available
100–200 · Words Per Generated Story
1 · Working App Shipped
/ The Concept

Most location apps tell you facts. After Dark tells you stories. The app detects your location, passes it as context to an LLM, and returns a narrative in one of seven styles — local legend, travel vignette, spooky tale, or children's story among them — alongside a generated illustration and voice narration.

/ What I Built

A mobile-first React app built in Lovable with four integrated layers: geolocation detection, AI story generation via Gemini LLM, image generation tied to story content, and ElevenLabs voice narration with a play/pause control. Stories save to a Supabase database with image URLs stored in cloud storage.
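
A minimal sketch of the geolocation layer, using the standard browser Geolocation API (the case study doesn't include source, so the wrapper shape here is illustrative, not the app's actual code):

```ts
// Minimal sketch: read the device position with the browser Geolocation API.
// The resulting coordinates become the location context for the story prompt.
function getLocation(): Promise<{ lat: number; lng: number }> {
  return new Promise((resolve, reject) => {
    if (!("geolocation" in navigator)) {
      reject(new Error("Geolocation is not supported on this device"));
      return;
    }
    navigator.geolocation.getCurrentPosition(
      (pos) => resolve({ lat: pos.coords.latitude, lng: pos.coords.longitude }),
      (err) => reject(new Error(err.message)),
      { enableHighAccuracy: true, timeout: 10_000 }
    );
  });
}
```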

/ Technical Architecture

Four APIs. One prompt.

The design challenge wasn't just building features — it was making four independent APIs feel like one seamless experience. Geolocation feeds the LLM. The LLM's story feeds the image generator. The story text then feeds ElevenLabs for narration. Each layer depends on the last. (The full chain is sketched in code after the steps below.)

Step 01 · Geolocation API
Step 02 · Gemini LLM
Step 03 · Image Generation
Step 04 · ElevenLabs Voice
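
In code, the chain reads as four awaits. The wrappers below (generateStory, generateIllustration, narrate) are hypothetical stand-ins for the Gemini, image-generation, and ElevenLabs calls; only the data flow mirrors the app:

```ts
interface Story {
  text: string;
  style: string;
}

// Hypothetical service wrappers: stand-ins for the Gemini, image-generation,
// and ElevenLabs calls. Signatures are assumptions, not the app's real API.
declare function generateStory(input: {
  location: { lat: number; lng: number };
  style: string;
}): Promise<Story>;
declare function generateIllustration(storyText: string): Promise<string>; // image URL
declare function narrate(storyText: string): Promise<string>; // audio URL

// One call per layer; each output becomes the next layer's input.
// getLocation is the browser sketch from "What I Built" above.
async function createExperience(style: string) {
  const location = await getLocation();                     // Step 01: Geolocation API
  const story = await generateStory({ location, style });   // Step 02: Gemini LLM
  const imageUrl = await generateIllustration(story.text);  // Step 03: Image Generation
  const audioUrl = await narrate(story.text);               // Step 04: ElevenLabs Voice
  return { story, imageUrl, audioUrl };
}
```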
/ RICECO Framework

The system prompt is the product.

Story quality lives or dies in the system prompt. Using the RICECO framework — Role, Instruction, Context, Examples, Constraints, Output Format — I engineered a stable, versioned prompt that accepts dynamic location and user preferences as inputs without modifying the core behavior. The result: consistent story quality across wildly different locations and styles.
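
The case study doesn't publish the prompt itself, so the sketch below is an illustrative reconstruction. Only the six RICECO sections, the 100–200 word target, and the rule that location and style arrive as runtime inputs come from the write-up; the prompt wording is an assumption.

```ts
// Illustrative RICECO-structured system prompt. The wording is invented;
// only the six-section structure and the word-count target come from the
// case study. The prompt is static, so it can be versioned like code.
const SYSTEM_PROMPT = `
Role: You are a local storyteller who turns real places into short narratives.
Instruction: Write one story about the location given in the user message.
Context: The reader is standing at that location right now, phone in hand.
Examples: [one short sample story per style would be listed here]
Constraints: 100-200 words, written in the requested style.
Output Format: Plain text. Title on the first line, story body after.
`.trim();

// Dynamic values travel in the user message, never in the system prompt.
function buildUserMessage(
  location: { lat: number; lng: number },
  style: string
): string {
  return `Location: ${location.lat}, ${location.lng}\nStyle: ${style}`;
}
```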

/ Selected Screens


01 · Location Permission — Onboarding
02 · Story Card — Generated Narrative + Illustration
/ Outcomes
01 · Full Stack Shipped

Geolocation. LLM. Image Gen. Voice. All live.

A working mobile app integrating geolocation, LLM text generation, AI image generation, and voice narration — built and deployed in a single course session.

02 · Prompt Engineering in Practice

RICECO framework, applied end-to-end.

Applied the RICECO framework to engineer a stable system prompt that produces consistent, styled narratives across any location input — with dynamic content passed as context, never baked into the prompt.

03 · Multi-Modal AI Stack

Four APIs chained into one experience.

Geolocation API → Gemini LLM (narrative) → Image Generation (illustration) → ElevenLabs (voice narration). Four independent services chained so the output of each layer becomes the input of the next.

/ What I Learned
Want to see it live? Open After Dark